Dog-Breed-Classification

Overview

The goal of this project is to provide a working prototype of a system capable of discriminating between dogs and humans. The system should not just choose between the two classes dog and human, but also return a none class for images that contain neither.

If an image is labeled as dog or human, the system should perform a second classification: assigning the most similar dog-breed class to the given image.

There is no minimum requirement on how many dog breeds the system needs to support.

Problem Statement

The project can be divided into the following steps.

Data

Udacity has provided two datasets for building the prototype: a dog image dataset (dogImages.zip, 133 breeds) and a human image dataset (lfw.zip, Labeled Faces in the Wild). Additionally, there is a small self-made dataset containing 20 testing images (5 dogs, 5 humans, 10 non-dogs & non-humans).

Finding best human detector

A model or vision system capable of determining whether a human is present in the image.

Finding best dog detector

A model or vision system capable of determining whether a dog is present in the image.

Finding best dog-breed detector

A model or vision system capable of assigning the most similar dog breed to a given image.

System assembly

There is no single way of completing this task. The system could combine the outputs of three separate models, or, for example, use one model that detects both dogs and humans and returns a probability for each. If both probabilities fall below a threshold, the system returns the none class; otherwise it returns either dog or human. The image is then passed to the next model, which determines the dog breed.


Transfer learning will greatly improve the performance of all detectors.
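The second assembly variant described above can be sketched as follows. Here `detector` and `breed_model` are hypothetical stand-ins for the trained models, and the 0.5 threshold is an arbitrary example value:

```python
def assemble(img_path, detector, breed_model, threshold=0.5):
    """Route an image through a dog/human detector and then a breed model."""
    p_dog, p_human = detector(img_path)   # probabilities of dog and human
    if max(p_dog, p_human) < threshold:
        return "none", None               # neither class is confident enough
    label = "dog" if p_dog >= p_human else "human"
    return label, breed_model(img_path)   # second stage: dog-breed model
```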

Metrics

As this is a classification problem, proper metrics to assess the system's capabilities are accuracy and/or F1-score.
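For a binary detector both metrics reduce to simple counts of true/false positives and negatives. A minimal self-contained illustration in plain Python (the sklearn functions used later compute the same quantities):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy and F1-score for binary labels, with 1 as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return accuracy, f1

# two of four predictions correct; precision = recall = 0.5, so F1 = 0.5
binary_metrics([1, 1, 0, 0], [1, 0, 1, 0])  # -> (0.5, 0.5)
```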

Machine

The project was prepared on my local machine equipped with a GeForce GTX 1070 Ti (https://www.nvidia.com/pl-pl/geforce/products/10series/geforce-gtx-1070/).

Code

Import

In [1]:
import time
import os
import shutil
import random
import zipfile
import warnings
import urllib.request
from glob import glob
from copy import deepcopy
from collections import Counter

import cv2
import requests
import numpy as np
import pandas as pd
from tqdm import tqdm
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

from sklearn.datasets import load_files 
from sklearn.metrics import f1_score, accuracy_score
from sklearn.exceptions import UndefinedMetricWarning

from keras.applications.resnet50 import ResNet50
from keras.applications.resnet50 import preprocess_input as resnet50_preprocess_input
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input as vgg16_preprocess_input
from keras.applications.vgg19 import VGG19
from keras.applications.vgg19 import preprocess_input as vgg19_preprocess_input
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import preprocess_input as inceptionv3_preprocess_input

from keras.preprocessing import image
from keras.utils import np_utils, to_categorical
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, save_model, load_model
from keras.layers import Activation, Conv2D, MaxPool2D, Dense, Flatten, Dropout, GlobalAveragePooling2D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.initializers import glorot_uniform
from keras.callbacks import ModelCheckpoint  
from keras.engine import Model

%matplotlib inline
Using TensorFlow backend.

Constants

In [2]:
GLOBAL_SEED = 300919902019

DATA_DIR = os.path.join("data")

AIND_DOG_URL = "https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip"
AIND_DOG_DIR = os.path.join(DATA_DIR, "aind_dog_images")
AIND_DOG_CLASSES_NUM = 133

AIND_HUMAN_DIR = os.path.join("data", "aind_human_images")
AIND_HUMAN_URL = "https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip"

TEST_IMAGES_DIR = os.path.join("data", "test_images")
TEST_IMAGES_URL = "https://www.dropbox.com/s/qdam34r8x0lajc7/test_images.zip?dl=1"

OPENCV_HAAR_CASCADES_DIR = "haarcascades"
OPENCV_GIT_BASEPATH = "https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/"
OPENCV_HAAR_CASCADES = {
    "haarcascade_frontalcatface.xml": OPENCV_GIT_BASEPATH + "haarcascade_frontalcatface.xml",
    "haarcascade_frontalcatface_extended.xml": OPENCV_GIT_BASEPATH + "haarcascade_frontalcatface_extended.xml",
    "haarcascade_frontalface_alt.xml": OPENCV_GIT_BASEPATH + "haarcascade_frontalface_alt.xml",
    "haarcascade_frontalface_alt2.xml": OPENCV_GIT_BASEPATH + "haarcascade_frontalface_alt2.xml",
    "haarcascade_frontalface_alt_tree.xml": OPENCV_GIT_BASEPATH + "haarcascade_frontalface_alt_tree.xml",
    "haarcascade_frontalface_default.xml": OPENCV_GIT_BASEPATH + "haarcascade_frontalface_default.xml"
}

SAVED_MODELS_DIR = "saved_models"

PREDICTOR_TYPE_HUMAN = "human"
PREDICTOR_TYPE_DOG = "dog"
PREDICTOR_TYPE_DOG_BREED = "dog_breed"

Predictor

In [3]:
predictors = {
    PREDICTOR_TYPE_HUMAN: {},
    PREDICTOR_TYPE_DOG: {},
    PREDICTOR_TYPE_DOG_BREED: {}
}
In [4]:
class Predictor:
    """Class for normalizing API for each model type. It is capable of making 
    predictions on data."""
    
    def __init__(self, predictor, label, **kwargs):
        """Constructor for predictor class.

        Parameters:
        -----------
        predictor: function
            Function taking two arguments: (img: str) and (kwargs: **kwargs). It uses the
            model stored in 'kwargs' and its preprocessing functions to load the image
            from the 'img' filepath and return a prediction.
        label: str
            Description of the stored model.
        kwargs: **kwargs
            Container with the model and its preprocessing functions. It is stored inside
            the predictor and used by the 'predictor' function. Keys depend on how
            'predictor' was designed.
       
        Returns:
        -----------
        None
        """
        self.predictor = predictor
        self.label = label
        self.kwargs = kwargs
        
    def predict(self, img):
        """For a given image filepath or list of image filepaths, returns predictions.
        For iterables, progress is reported with a progress bar.

        Parameters:
        -----------
        img: str, list or numpy.ndarray
            Single image filepath or an iterable container with image filepaths.

        Returns:
        -----------
        result: numpy.ndarray
            Numpy array with predictions for each image.
        """
        if isinstance(img, list) or isinstance(img, np.ndarray): 
            desc = self.label.rjust(50)
            result = [self.predictor(i, **self.kwargs) for i in tqdm(img, desc=desc)]
            return np.array(result)
        elif isinstance(img, str):
            result = [self.predictor(img, **self.kwargs)]
            return np.array(result)
        else:  
            raise TypeError("Unsupported input format: {}".format(type(img)))
        
def test_binary_predictors(predictors, expected_true_data, expected_false_data):
    """Pipeline for testing predictors which are capable of returning only binary
    values: 0 and 1. For each predictor inside dictionary it makes prediction on
    given data and transforms the results into pd.DataFrame so the models can
    be compared.

    Parameters:
    -----------
    predictors: dict
        Dictionary containing predictor classes, where key is the label of predictor.
    expected_true_data: list or ndarray
        Iterable containing path to images for which prediction class should be 1.
    expected_false_data: list or ndarray
        Iterable containing path to images for which prediction class should be 0.

    Returns:
    -----------
    report: pd.DataFrame
        DataFrame containing f1_score, accuracy, and confusion-matrix counts for all 
        tested predictors.
    """
    results = {
        "accuracy": [], "f1_score": [], "true_positive": [],
        "true_negative": [], "false_positive": [], "false_negative": []
    }
    
    expected_result = np.concatenate(
        [np.ones(len(expected_true_data)), np.zeros(len(expected_false_data))]
    )
    
    for label, predictor in predictors.items():
        pred_true = predictor.predict(expected_true_data)
        pred_false = predictor.predict(expected_false_data)
        result = np.concatenate([pred_true, pred_false])
        
        accuracy = accuracy_score(expected_result, result)
        f1 = f1_score(expected_result, result)
        tp = pred_true.sum()
        fn = len(pred_true) - pred_true.sum()
        fp = pred_false.sum()
        tn = len(pred_false) - pred_false.sum()
        
        results["accuracy"].append(accuracy)
        results["f1_score"].append(f1)
        results["true_positive"].append(tp)
        results["true_negative"].append(tn)
        results["false_positive"].append(fp)
        results["false_negative"].append(fn)
        
    report = pd.DataFrame(results, index=list(predictors.keys()))
    return report

def test_multiclass_predictors(predictors, data, labels):
    """Pipeline for testing predictors which are capable of returning multiclass
    predictions. For each predictor inside dictionary it makes prediction on
    given data and transforms the results into pd.DataFrame so the models can
    be compared.

    Parameters:
    -----------
    predictors: dict
        Dictionary containing predictor classes, where key is the label of predictor.
    data: list or numpy.ndarray
        Iterable containing path to images.
    labels: list or numpy.ndarray
        Iterable containing expected classes for each image.

    Returns:
    -----------
    report: pd.DataFrame
        DataFrame containing f1_score, accuracy, for all tested predictors.
    """
    results = {"accuracy": [], "f1_score": []}
    
    expected_result = np.argmax(labels, axis=1)
    
    for label, predictor in predictors.items():
        result = predictor.predict(data)
      
        accuracy = accuracy_score(expected_result, result)
        
        with warnings.catch_warnings():
            warnings.filterwarnings("ignore", category=UndefinedMetricWarning)
            f1 = f1_score(expected_result, result, average="weighted")
    
        results["accuracy"].append(accuracy)
        results["f1_score"].append(f1)
        
    return pd.DataFrame(results, index=list(predictors.keys()))

Loading Data

In [5]:
def _download_zip_data(url, path):
    """Function downloads a .zip file from the given url and unzips it. Afterwards it
    renames the unzipped directory to the folder given in 'path'. The destination
    directory is removed first if it already exists.
    
    Parameters:
    -----------
    url: str
        Url to file with dataset.
    path: str
        Absolute path containing folder, located inside DATA_DIR, to where data will
        be downloaded.
        
    Returns:
    -----------
    None
    """
    filename = url.split("/")[-1]
    filepath = os.path.join(DATA_DIR, filename)
    
    if os.path.exists(path):
        print("\t- data folder already exists! Cleaning...")
        shutil.rmtree(path)

    urllib.request.urlretrieve(url, filepath)
    print("\t- fetched file: {}".format(filepath))

    with zipfile.ZipFile(filepath, "r") as f:
        f.extractall(DATA_DIR)

    unzip_dir = os.path.splitext(filepath)[0]
    print("\t- unzipped file to: {}".format(unzip_dir))

    src, dst = unzip_dir, path
    os.rename(src, dst)
    print("\t- renamed '{}' directory to '{}'".format(src, dst))

    os.remove(filepath) 
    print("\t- removed '{}' file".format(filepath))

def load_aind_dog_data(url, path, use_cache=True):
    """Function performs loading of the Udacity-provided dog dataset, whose specific
    directory structure is adjusted for the sklearn.datasets.load_files function. It
    loads data from cache if it already exists; otherwise the data is downloaded and
    unzipped beforehand.
    
    Parameters:
    -----------
    url: str
        Url to file with dataset.
    path: str
        Absolute path containing folder, located inside DATA_DIR, to where data will
        be downloaded.
    use_cache: bool
        If False, the data is re-downloaded even if it already exists locally.
        
    Returns:
    -----------
    train_data: tuple
        Two-element tuple containing train image filepaths and train targets in 
        numpy.ndarray format.
    val_data: tuple
        Two-element tuple containing validation image filepaths and validation targets in 
        numpy.ndarray format.
    test_data: tuple
        Two-element tuple containing test image filepaths and test targets in 
        numpy.ndarray format.
    """
    if not use_cache:
        print("Flag 'use_cache' set to 'False', downloading data.")
        _download_zip_data(url, path)
    elif not os.path.exists(path):
        print("Path '{}' not found, downloading data.".format(path))
        _download_zip_data(url, path)
    else:
        print("Path '{}' found, loading cached data.".format(path))
        
    def _load_dataset(path):
        data = load_files(path)
        dog_files = np.array(data["filenames"])
        dog_targets = np_utils.to_categorical(
            np.array(data["target"]), AIND_DOG_CLASSES_NUM)
        return dog_files, dog_targets
    
    train_filepath = os.path.join(path, "train")
    train_data = _load_dataset(train_filepath)
    
    val_filepath = os.path.join(path, "valid")
    val_data = _load_dataset(val_filepath)
    
    test_filepath = os.path.join(path, "test")
    test_data = _load_dataset(test_filepath)
    
    return train_data, val_data, test_data

def load_aind_human_data(url, path, use_cache=True): 
    """Function performs loading of Udacity provided human dataset. It loads
    data from cache if it already exists. Otherwise data is downloaded and 
    unzipped beforehand. 
    
    Parameters:
    -----------
    url: str
        Url to file with dataset.
    path: str
        Absolute path containing folder, located inside DATA_DIR, to where data will
        be downloaded.
    use_cache: bool
        If False, the data is re-downloaded even if it already exists locally.
        
    Returns:
    -----------
    train_data: numpy.ndarray
        Container with train image filepaths. Ratio 0.6.
    val_data: numpy.ndarray
        Container with validation image filepaths. Ratio 0.2.
    test_data: numpy.ndarray
        Container with test image filepaths. Ratio 0.2.
    """
    if not use_cache:
        print("Flag 'use_cache' set to 'False', downloading data.")
        _download_zip_data(url, path)
    elif not os.path.exists(path):
        print("Path '{}' not found, downloading data.".format(path))
        _download_zip_data(url, path)
    else:
        print("Path '{}' found, loading cached data.".format(path))

    data = np.array(glob(os.path.join(path, "*", "*")))
    random.shuffle(data)
    
    train_split_index = int(data.shape[0] * 0.8)
    train_data = data[:train_split_index]
    test_data = data[train_split_index:]
    
    val_split_index = int(test_data.shape[0] * 0.5)
    val_data = test_data[:val_split_index]
    test_data = test_data[val_split_index:]
    
    return train_data, val_data, test_data

def load_test_images(url, path, use_cache=True):
    """Function performs loading of self-made test dataset. It loads
    data from cache if it already exists. Otherwise data is downloaded and 
    unzipped beforehand.
    
    Parameters:
    -----------
    url: str
        Url to file with dataset.
    path: str
        Absolute path containing folder, located inside DATA_DIR, to where data will
        be downloaded.
    use_cache: bool
        If False, the data is re-downloaded even if it already exists locally.
        
    Returns:
    -----------
    test_data: numpy.ndarray
        Array containing paths to test images.
    """
    if not use_cache:
        print("Flag 'use_cache' set to 'False', downloading data.")
        _download_zip_data(url, path)
    elif not os.path.exists(path):
        print("Path '{}' not found, downloading data.".format(path))
        _download_zip_data(url, path)
    else:
        print("Path '{}' found, loading cached data.".format(path))
    
    test_data = np.array(glob(os.path.join(path, "*")))
    return test_data

def download_haar_cascade(url, path, use_cache=True):
    """Function downloads an .xml file from the given url and saves it to the specified path. 
    
    Parameters:
    -----------
    url: str
        Url to file with dataset.
    path: str
        Path to folder where data will be downloaded.
    use_cache: bool
        If False, the file is re-downloaded even if it already exists locally.
        
    Returns:
    -----------
    None
    """
    def _get_data(url, path):
        response = requests.get(url)
        with open(path, "wb") as file:
            file.write(response.content)
    
    if not use_cache:
        print("Flag 'use_cache' set to 'False', downloading file.")
        _get_data(url, path)
    elif not os.path.exists(path):
        print("File '{}' not found, downloading file.".format(path))
        _get_data(url, path)
    else:
        print("File '{}' already exists.".format(path))
        
def path_to_tensor(img_path, img_size=(224, 224)):
    """Function takes image from specific filepath, resizes it and saves as numpy.ndarray.
    
    Parameters:
    -----------
    img_path: str
        Filepath to image file.
    img_size: tuple
        Tuple to which loaded image will be resized.
        
    Returns:
    -----------
    img: numpy.ndarray
        Returns loaded and resized image.
    """
    img = image.load_img(img_path, target_size=img_size)
    img = image.img_to_array(img)
    img = np.expand_dims(img, axis=0)
    return img

def paths_to_tensor(img_paths):
    """Wrapper for 'path_to_tensor' function. Takes list of filepaths and loads all 
    of them into numpy.ndarray. 
    
    Parameters:
    -----------
    img_paths: list
        List of image filepaths.
        
    Returns:
    -----------
    list_of_tensors: numpy.ndarray
        Returns loaded and resized images.
    """
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    list_of_tensors =  np.vstack(list_of_tensors)
    return list_of_tensors

Data Preview

In [35]:
def display_dog_class_balance(dog_data_dir):
    """Displays dog breed class counts on barplot.
    
    Parameters:
    -----------
    dog_data_dir: str
        String to folder where all images are contained.
        
    Returns:
    -----------
    None
    """
    all_dog_image_filepaths = np.array(glob(os.path.join(dog_data_dir, "*", "*", "*")))
    dog_breeds = [s.split(os.path.sep)[-2].split(".")[-1] for s in all_dog_image_filepaths]
    counter = Counter(dog_breeds)
    
    labels, y = [t[0] for t in counter.most_common()], [t[1] for t in counter.most_common()]
    idx = range(0, len(labels))
    idx = range(0, len(labels))
    
    plt.figure(figsize=(16, 4))
    plt.bar(idx, y)
    plt.title("Dog Breed Class Balance")
    plt.ylabel("Image Count")
    plt.xlabel("Unique Dog Breed Id")
    plt.xticks(range(0, 140, 5))
    plt.grid(c="#444444", linestyle='--', linewidth=1, alpha=0.2)
    plt.gca().spines["top"].set_visible(False)
    plt.gca().spines["right"].set_visible(False)
    plt.gca().spines["bottom"].set_visible(False)
    plt.gca().spines["left"].set_visible(False)
    plt.show()
    
def display_random_image_grid(image_filepaths, rows=5, cols=5):
    """Displays randomly picked images from dataset.
    
    Parameters:
    -----------
    image_filepaths: list
        List of image filepaths.
        
    Returns:
    -----------
    None
    """
    fig, axes = plt.subplots(ncols=cols, nrows=rows, constrained_layout=True)
    fig.set_size_inches(16, 16)
    
    indices = np.arange(len(image_filepaths))
    random.shuffle(indices)
    indices = indices[:(rows*cols)]
    
    col, row = 0, 0
    for i, idx in enumerate(indices):
        if col >= cols:
            col = 0
            row = row + 1
        
        filepath = image_filepaths[idx]
        filename = filepath.split(os.path.sep)[-1]
        img = mpimg.imread(filepath)
        axes[row][col].set_title(filename)
        axes[row][col].imshow(img)
        
        col = col + 1
    
def display_training_history(history, label):
    """Displays model training history on plot.
    
    Parameters:
    -----------
    history: keras.callbacks.History
        Object containing information about the model training process.
    label: str
        Model description used in the plot title.
        
    Returns:
    -----------
    None
    """
    train_error = history.history["loss"]
    val_error = history.history["val_loss"]

    plt.figure(figsize=(16, 4))
    plt.plot(train_error, label="Train Samples Error")
    plt.plot(val_error, label="Validation Samples Error")
    plt.title("{} - Training History".format(label))
    plt.ylabel("Cross Entropy Error")
    plt.xlabel("Epoch")
    plt.xticks(range(0, len(train_error), 10))
    plt.grid(c="#444444", linestyle='--', linewidth=1, alpha=0.2)
    plt.legend()
    plt.gca().spines["top"].set_visible(False)
    plt.gca().spines["right"].set_visible(False)
    plt.gca().spines["bottom"].set_visible(False)
    plt.gca().spines["left"].set_visible(False)
    plt.show()

Assembly

In [7]:
random.seed(GLOBAL_SEED)

Data is loaded as lists of paths to image files; storing all images in memory at once could cause an out-of-memory error.
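A minimal sketch of the batching this enables: paths are cheap to hold, and pixels are materialized one slice at a time (e.g. via the `paths_to_tensor` helper defined above), so memory usage stays bounded:

```python
def batch_paths(paths, batch_size=32):
    """Yield successive slices of image filepaths; each slice is later
    converted to pixel tensors, so at most one batch lives in memory."""
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]
```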

Loading: aind_dog_images

In [8]:
dog_train, dog_val, dog_test = load_aind_dog_data(AIND_DOG_URL, AIND_DOG_DIR)
Path 'data/aind_dog_images' found, loading cached data.
In [9]:
print("Image number/targets in datasets:")
for label, data in zip(["train", "val", "test"], [dog_train, dog_val, dog_test]):
    inputs, targets = data
    print(" - {}: {}, {}".format(label, inputs.shape[0], targets.shape[1]))
Image number/targets in datasets:
 - train: 6680, 133
 - val: 835, 133
 - test: 836, 133
In [10]:
class_order = [os.path.basename(item.rstrip("/")).split(".")[-1] for item in sorted(glob(AIND_DOG_DIR + "/train/*/"))]
print("Dog breeds: \n{}".format(class_order))
Dog breeds: 
['Affenpinscher', 'Afghan_hound', 'Airedale_terrier', 'Akita', 'Alaskan_malamute', 'American_eskimo_dog', 'American_foxhound', 'American_staffordshire_terrier', 'American_water_spaniel', 'Anatolian_shepherd_dog', 'Australian_cattle_dog', 'Australian_shepherd', 'Australian_terrier', 'Basenji', 'Basset_hound', 'Beagle', 'Bearded_collie', 'Beauceron', 'Bedlington_terrier', 'Belgian_malinois', 'Belgian_sheepdog', 'Belgian_tervuren', 'Bernese_mountain_dog', 'Bichon_frise', 'Black_and_tan_coonhound', 'Black_russian_terrier', 'Bloodhound', 'Bluetick_coonhound', 'Border_collie', 'Border_terrier', 'Borzoi', 'Boston_terrier', 'Bouvier_des_flandres', 'Boxer', 'Boykin_spaniel', 'Briard', 'Brittany', 'Brussels_griffon', 'Bull_terrier', 'Bulldog', 'Bullmastiff', 'Cairn_terrier', 'Canaan_dog', 'Cane_corso', 'Cardigan_welsh_corgi', 'Cavalier_king_charles_spaniel', 'Chesapeake_bay_retriever', 'Chihuahua', 'Chinese_crested', 'Chinese_shar-pei', 'Chow_chow', 'Clumber_spaniel', 'Cocker_spaniel', 'Collie', 'Curly-coated_retriever', 'Dachshund', 'Dalmatian', 'Dandie_dinmont_terrier', 'Doberman_pinscher', 'Dogue_de_bordeaux', 'English_cocker_spaniel', 'English_setter', 'English_springer_spaniel', 'English_toy_spaniel', 'Entlebucher_mountain_dog', 'Field_spaniel', 'Finnish_spitz', 'Flat-coated_retriever', 'French_bulldog', 'German_pinscher', 'German_shepherd_dog', 'German_shorthaired_pointer', 'German_wirehaired_pointer', 'Giant_schnauzer', 'Glen_of_imaal_terrier', 'Golden_retriever', 'Gordon_setter', 'Great_dane', 'Great_pyrenees', 'Greater_swiss_mountain_dog', 'Greyhound', 'Havanese', 'Ibizan_hound', 'Icelandic_sheepdog', 'Irish_red_and_white_setter', 'Irish_setter', 'Irish_terrier', 'Irish_water_spaniel', 'Irish_wolfhound', 'Italian_greyhound', 'Japanese_chin', 'Keeshond', 'Kerry_blue_terrier', 'Komondor', 'Kuvasz', 'Labrador_retriever', 'Lakeland_terrier', 'Leonberger', 'Lhasa_apso', 'Lowchen', 'Maltese', 'Manchester_terrier', 'Mastiff', 'Miniature_schnauzer', 'Neapolitan_mastiff', 
'Newfoundland', 'Norfolk_terrier', 'Norwegian_buhund', 'Norwegian_elkhound', 'Norwegian_lundehund', 'Norwich_terrier', 'Nova_scotia_duck_tolling_retriever', 'Old_english_sheepdog', 'Otterhound', 'Papillon', 'Parson_russell_terrier', 'Pekingese', 'Pembroke_welsh_corgi', 'Petit_basset_griffon_vendeen', 'Pharaoh_hound', 'Plott', 'Pointer', 'Pomeranian', 'Poodle', 'Portuguese_water_dog', 'Saint_bernard', 'Silky_terrier', 'Smooth_fox_terrier', 'Tibetan_mastiff', 'Welsh_springer_spaniel', 'Wirehaired_pointing_griffon', 'Xoloitzcuintli', 'Yorkshire_terrier']
In [11]:
display_dog_class_balance(AIND_DOG_DIR)

As can be observed, there is a slight imbalance in the class distribution.

Preview of few images:

In [12]:
display_random_image_grid(dog_train[0])

Spotted potential issues:

  1. Many dogs in the same image.

    This is not a big problem on its own, but some breeds look alike - for example, the German Pinscher and the Doberman. Depending on how the image is taken and what the dogs are doing, models can get confused. The ideal case is therefore a single dog in the image, facing the camera.
  2. Dogs of more than one breed in the same image.

    This is a huge problem. For example, the image `Irish_wolfhound_06083.jpg` shows an Irish Wolfhound playing with two other, smaller dogs. The model has no way of knowing which dog is the most important one. This can bias the model into mixing up predictions and cause problems with generalization.
  3. People together with dogs.

    The system should be able to differentiate between dogs and humans. If a dog and a human appear in the same image, handling the further logic becomes problematic: there is no way to predict the dog breed for both separately.

Loading: aind_human_images

There are no labels provided for human images.

In [13]:
human_train, human_val, human_test = load_aind_human_data(AIND_HUMAN_URL, AIND_HUMAN_DIR)
Path 'data/aind_human_images' found, loading cached data.
In [14]:
print("Image number in datasets:")
for label, data in zip(["train", "val", "test"], [human_train, human_val, human_test]):
    print(" - {}: {}".format(label, data.shape[0]))
Image number in datasets:
 - train: 10586
 - val: 1323
 - test: 1324

Preview of few images:

In [15]:
display_random_image_grid(human_train)

Human images won't be used much in this project - only for face detection. When detecting specific humans, there might be issues, because there are often many people in the same image.

Loading: test_images

In [16]:
test_data = load_test_images(TEST_IMAGES_URL, TEST_IMAGES_DIR)
Path 'data/test_images' found, loading cached data.
In [17]:
for img_path in os.listdir(TEST_IMAGES_DIR):
    print(img_path)
nondog9.jpg
nondog3.jpg
dog1_dalmatian.jpg
dog2_pug.jpg
nondog2.jpg
nondog10.png
nondog5.jpeg
nondog4.jpeg
nondog6.png
person2.jpg
person1.jpg
person4.png
nondog8.jpg
dog3_doberman.jpg
person3.png
person5.jpg
nondog7.jpeg
dog5_siberian_husky.jpg
dog4_golden_retriever.jpg
nondog1.jpg

Preview of few images:

In [18]:
display_random_image_grid(test_data, rows=4, cols=4)

Randomly picked images from the web might be troublesome for the system when it comes to dog-breed prediction. The picked images of a Pug and a Doberman don't show the whole dog's body, so they might get mistaken for similar breeds.

Finding human detector

OpenCV Haar Cascade based human predictor

OpenCV provides pre-trained detectors of shapes within an image, called Haar Cascades. Several of them support detecting human faces. They slide a scanning window over the image in order to find shapes known to them. Such a detector could be used instead of a neural network: if a human face is found in the image, the model returns the human label, otherwise none.

In [19]:
def build_haar_cascade_human_predictor(cascade_path):
    """Builds a Predictor class encapsulating haar cascade.
    
    Parameters:
    -----------
    cascade_path: str
        Path to haar cascade .xml file.
        
    Returns:
    -----------
    predictor: Predictor
        Class capable of making predictions with the model and its preprocessing
        functions stored inside kwargs.

    """
    def _human_predictor(img_path, **kwargs):
        """Function for detecting humans with haar cascade.
        
        Parameters:
        -----------
        img_path: str
            Path to image file.
        kwargs: **kwargs
            Container with model and related functions.
            
        Returns:
        -----------
        result: int
            Returns integer representing binary class (1: human, 0: no human).
        """
        clf = kwargs.get("cascade")
        to_tensor_function = kwargs.get("to_tensor")
       
        img = to_tensor_function(img_path)
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        
        result = clf.detectMultiScale(img_gray)
        return int(len(result) > 0)
        
    kwargs = {
        "cascade": cv2.CascadeClassifier(cascade_path),
        "to_tensor": cv2.imread
    }
    
    label = cascade_path.split(os.path.sep)[-1][:-4]
    predictor = Predictor(_human_predictor, label, **kwargs)
   
    return predictor
  • Building predictors
In [20]:
for cascade_name, cascade_url in OPENCV_HAAR_CASCADES.items():
    cascade_filepath = os.path.join(OPENCV_HAAR_CASCADES_DIR, cascade_name)
    download_haar_cascade(cascade_url, cascade_filepath)

    predictor = build_haar_cascade_human_predictor(cascade_filepath)
    
    predictors[PREDICTOR_TYPE_HUMAN][predictor.label] = predictor
File 'haarcascades/haarcascade_frontalcatface.xml' already exists.
File 'haarcascades/haarcascade_frontalcatface_extended.xml' already exists.
File 'haarcascades/haarcascade_frontalface_alt.xml' already exists.
File 'haarcascades/haarcascade_frontalface_alt2.xml' already exists.
File 'haarcascades/haarcascade_frontalface_alt_tree.xml' already exists.
File 'haarcascades/haarcascade_frontalface_default.xml' already exists.

Compare human predictors

In [21]:
test_binary_predictors(
    predictors[PREDICTOR_TYPE_HUMAN], human_train[:500], dog_train[0][:500]
)
                        haarcascade_frontalcatface: 100%|██████████| 500/500 [00:07<00:00, 71.41it/s]
                        haarcascade_frontalcatface: 100%|██████████| 500/500 [00:27<00:00, 17.95it/s]
               haarcascade_frontalcatface_extended: 100%|██████████| 500/500 [00:07<00:00, 70.44it/s]
               haarcascade_frontalcatface_extended: 100%|██████████| 500/500 [00:30<00:00, 16.18it/s]
                       haarcascade_frontalface_alt: 100%|██████████| 500/500 [00:07<00:00, 64.34it/s]
                       haarcascade_frontalface_alt: 100%|██████████| 500/500 [00:24<00:00, 12.24it/s]
                      haarcascade_frontalface_alt2: 100%|██████████| 500/500 [00:07<00:00, 70.43it/s]
                      haarcascade_frontalface_alt2: 100%|██████████| 500/500 [00:23<00:00, 12.45it/s]
                  haarcascade_frontalface_alt_tree: 100%|██████████| 500/500 [00:05<00:00, 99.33it/s] 
                  haarcascade_frontalface_alt_tree: 100%|██████████| 500/500 [00:17<00:00, 28.12it/s]
                   haarcascade_frontalface_default: 100%|██████████| 500/500 [00:08<00:00, 59.52it/s]
                   haarcascade_frontalface_default: 100%|██████████| 500/500 [00:22<00:00, 14.36it/s]
Out[21]:
accuracy f1_score true_positive true_negative false_positive false_negative
haarcascade_frontalcatface 0.503 0.234206 76 427 73 424
haarcascade_frontalcatface_extended 0.481 0.142149 43 438 62 457
haarcascade_frontalface_alt 0.935 0.938505 496 439 61 4
haarcascade_frontalface_alt2 0.883 0.894118 494 389 111 6
haarcascade_frontalface_alt_tree 0.741 0.657860 249 492 8 251
haarcascade_frontalface_default 0.725 0.783975 499 226 274 1

Finding dog predictor

ResNet50 based dog predictor

ResNet50 is a large neural network which comes pre-trained on ImageNet and is already capable of categorizing dog breeds among its output classes.

Because the model's input format is already fixed, the function preprocess_input needs to be applied to each image:

First, the RGB image is converted to BGR by reordering the channels. Then a normalization step, shared by all of these pre-trained models, subtracts the mean pixel from every pixel in the image; the mean, calculated from all pixels in all images in ImageNet, is $[103.939, 116.779, 123.68]$ in BGR order.
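
What preprocess_input does can be sketched with plain NumPy. This is a simplified stand-in for Keras' resnet50 preprocess_input (the real function also handles batches and other data formats):

```python
import numpy as np

# ImageNet per-channel mean pixel, in BGR order (values as cited above)
IMAGENET_MEAN_BGR = np.array([103.939, 116.779, 123.68])

def caffe_style_preprocess(img_rgb):
    """Reorder channels RGB -> BGR, then subtract the ImageNet mean pixel."""
    img_bgr = img_rgb[..., ::-1].astype("float64")  # flip the channel axis
    return img_bgr - IMAGENET_MEAN_BGR

# a single mean-valued pixel maps to zeros after preprocessing
pixel = np.array([[[123.68, 116.779, 103.939]]])
print(caffe_style_preprocess(pixel))  # all zeros
```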

The model then returns a softmax probability for each of its outputs; the number of outputs equals the number of classes (1000) it was trained on. Class details can be found here: https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a.

Looking at the class list, it is possible to observe that the dog indices are located between values 151 and 268 (inclusive).
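
The dog check therefore reduces to a range test on the argmax index; a small sketch of that mapping:

```python
DOG_CLASS_MIN, DOG_CLASS_MAX = 151, 268  # ImageNet dog-breed class index range

def is_dog(class_index):
    """Map an ImageNet argmax index to a binary label (1: dog, 0: no dog)."""
    return int(DOG_CLASS_MIN <= class_index <= DOG_CLASS_MAX)

print(is_dog(151), is_dog(268), is_dog(150))  # 1 1 0
```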

In [22]:
def build_resnet50_dog_predictor():
    """Builds a Predictor class encapsulating pre-trained ResNet50 model.
    
    Parameters:
    -----------
    None
        
    Returns:
    -----------
    predictor: Predictor
        Class capable of making predictions with the model and its preprocessing
        functions stored inside kwargs.

    """
    
    def _dog_predictor(img_path, **kwargs):
        """Function for detecting dogs with ResNet 50 model.
        
        Parameters:
        -----------
        img_path: str
            Path to image file.
        kwargs: **kwargs
            Container with model and related functions.
            
        Returns:
        -----------
        result: int
            Returns integer representing binary class (1: dog, 0: no dog).
        """
        clf = kwargs.get("model")
        to_tensor_function = kwargs.get("to_tensor")
        preprocess_input_function = kwargs.get("preprocess_input")
         
        img = to_tensor_function(img_path)
        img = preprocess_input_function(img)
        
        result = np.argmax(clf.predict(img))
        
        return int((result <= 268) and (result >= 151)) 
        
    kwargs = {
        "model": ResNet50(weights="imagenet"),   
        "to_tensor": path_to_tensor,
        "preprocess_input": resnet50_preprocess_input
    }
    
    label = "resnet50"
    predictor = Predictor(_dog_predictor, label, **kwargs)
   
    return predictor
  • Build predictor
In [23]:
predictor = build_resnet50_dog_predictor()
predictors[PREDICTOR_TYPE_DOG][predictor.label] = predictor

Compare dog predictors

In [24]:
test_binary_predictors(
    predictors[PREDICTOR_TYPE_DOG], dog_train[0][:500], human_train[:500]
)
                                          resnet50: 100%|██████████| 500/500 [00:11<00:00, 43.82it/s]
                                          resnet50: 100%|██████████| 500/500 [00:06<00:00, 75.77it/s]
Out[24]:
accuracy f1_score true_positive false_negative true_negative false_positive
resnet50 0.988 0.987952 492 8 496 4

Finding dog-breed predictor

Dog-breed predictor from scratch

Using the available data to build a dog-breed predictor.

In this section, the following ideas will be tried:

  1. Building a CNN from scratch

    Training a model from scratch on just the available dog-breed images; it is expected to serve as a baseline for this problem. The model is a neural network of the convolutional type: it scans the image with kernels, where each kernel cell has a trainable weight attached to it. Thanks to that, the network learns its own filters and decides what the best way to look at the image is. Convolutional layers are stacked in pairs, which lets the network perform feature extraction twice: the first convolution picks up the pixels that activate the output the most, and the second selects the best of the best. Between convolution pairs, max pooling with kernel size (2, 2) is applied: it scans the convolution output with a small 2x2 matrix and keeps the pixel with the highest value. The chosen architecture is a trade-off between accuracy and training time; to keep training time down, intermediate dense layers are omitted, as convolutional layers require fewer parameters to train.

  2. Fine-tuning large neural networks of known architectures

    There are already pre-trained large neural networks such as VGG16, VGG19, ResNet50, Xception, InceptionV3 and many more. As all of those architectures are meant to work with images, they already contain weights that know how to look at images. What is done here is transfer learning by fine-tuning: the whole convolutional part, containing the image knowledge, is frozen and its weights are left unchanged. The input image goes through the frozen part, and its output becomes the input to newly attached trainable layers and a new output layer. That way the "frozen knowledge" of the pre-trained network is used to extract discriminative features from the image, while the trainable part is responsible only for drawing conclusions about the data.
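
The (2, 2) max pooling described in point 1 can be sketched in plain NumPy (an illustration of the operation only, not the Keras MaxPool2D layer itself):

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Downsample an (H, W) map by taking the max of each non-overlapping
    2x2 window (assumes H and W are even)."""
    h, w = feature_map.shape
    # reshape so axes 1 and 3 index within each 2x2 window, then reduce
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.array([[1, 2, 5, 6],
               [3, 4, 7, 8],
               [9, 1, 2, 3],
               [4, 5, 6, 7]])
print(max_pool_2x2(fm))  # [[4 8]
                         #  [9 7]]
```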

In [25]:
def build_custom_cnn_dogbreed_predictor(dog_train, dog_val):
    """Builds a Predictor class encapsulating CNN model.
    
    Parameters:
    -----------
    dog_train: tuple
        Container with train images filepaths and train targets.
    dog_val: tuple
        Container with validation images filepaths and validation targets.
        
    Returns:
    -----------
    predictor: Predictor
        Class capable of making predictions with the model and its preprocessing
        functions stored inside kwargs.
    """  
    def _dogbreed_predictor(img_path, **kwargs):
        """Function for detecting dog-breed with CNN model.
        
        Parameters:
        -----------
        img_path: str
            Path to image file.
        kwargs: **kwargs
            Container with model and related functions.
            
        Returns:
        -----------
        result: int
            Returns integer representing class.
        """
        clf = kwargs.get("model")
        to_tensor_function = kwargs.get("to_tensor")
        preprocess_input_function = kwargs.get("preprocess_input")
        
        img = to_tensor_function(img_path)
        img = preprocess_input_function(img)
        
        result = np.argmax(clf.predict(img))
        
        return result
    
    def _preprocess_input(img):
        """Function for scaling pixel values to <0.0, 1.0> range. 
        
        Parameters:
        -----------
        img: numpy.ndarray
            Array with RGB images, with pixel values in the [0.0, 255.0] range.
            
        Returns:
        -----------
        img: numpy.ndarray
            Scaled image.
        """
        return img.astype("float32") / 255.0
    
    def _train_model(dog_train_i, dog_train_t, dog_val_i, dog_val_t):
        """Function for building, compiling and training custom CNN model.
        It saves the best model to SAVED_MODELS_DIR.
        
        Parameters:
        -----------
        dog_train_i: numpy.ndarray
            Array with train images loaded as a numpy.ndarray.
        dog_train_t: numpy.ndarray
            Array with one-hot encoded train targets.
        dog_val_i: numpy.ndarray
            Array with validation images loaded as a numpy.ndarray.
        dog_val_t: numpy.ndarray
            Array with one-hot encoded validation targets.
            
        Returns:
        -----------
        model: Sequential
            Trained model with loaded weights from best epoch.
        history: dict
            Dictionary containing data about model training process.
        """
        model = Sequential()
        model.add(Dense(input_shape=(224, 224, 3), units=16))

        model.add(Conv2D(filters=16, kernel_size=2, strides=1, 
                         padding="valid", activation="relu"))
        model.add(Conv2D(filters=16, kernel_size=2, strides=1, 
                         padding="valid", activation="relu"))
        model.add(MaxPool2D(pool_size=(2), strides=(2), padding="valid"))
        model.add(Dropout(0.2))

        model.add(Conv2D(filters=32, kernel_size=2, strides=1, 
                         padding="valid", activation="relu"))
        model.add(Conv2D(filters=32, kernel_size=2, strides=1, 
                         padding="valid", activation="relu"))
        model.add(MaxPool2D(pool_size=(2), strides=(2), padding="valid"))
        model.add(Dropout(0.2))
        
        model.add(Conv2D(filters=32, kernel_size=2, strides=1, 
                         padding="valid", activation="relu"))
        model.add(Conv2D(filters=32, kernel_size=2, strides=1, 
                         padding="valid", activation="relu"))
        model.add(MaxPool2D(pool_size=(2), strides=(2), padding="valid"))
        model.add(Dropout(0.2))

        model.add(Conv2D(filters=64, kernel_size=2, strides=1, 
                         padding="valid", activation="relu"))
        model.add(Conv2D(filters=64, kernel_size=2, strides=1, 
                         padding="valid", activation="relu"))
        model.add(MaxPool2D(pool_size=(2), strides=(2), padding="same"))
        
        model.add(GlobalAveragePooling2D())
        model.add(Dense(units=AIND_DOG_CLASSES_NUM, activation="softmax"))

        model.summary()
        
        model.compile(
            optimizer=RMSprop(0.002), loss="categorical_crossentropy", metrics=["accuracy"]
        )

        best_model_filepath = os.path.join(SAVED_MODELS_DIR, "custom_cnn_dogbreed.h5")
        checkpointer = ModelCheckpoint(
            filepath=best_model_filepath, verbose=1, save_best_only=True, 
            save_weights_only=False
        )

        history = model.fit(
            dog_train_i, dog_train_t, 
            validation_data=(dog_val_i, dog_val_t),
            epochs=200, batch_size=64, callbacks=[checkpointer], verbose=1
        )

        model = load_model(best_model_filepath, compile=False)
        
        return history, model
        
    dog_train_i, dog_train_t = dog_train
    dog_val_i, dog_val_t = dog_val
    
    print("Converting dog-breed train images to tensor...")
    time.sleep(0.25)
    dog_train_i = paths_to_tensor(dog_train_i)
    
    print("Converting dog-breed validation images to tensor...")
    time.sleep(0.25)
    dog_val_i = paths_to_tensor(dog_val_i)
    
    print("Normalizing dog-breed train data...")
    dog_train_i = _preprocess_input(dog_train_i)
    
    print("Normalizing dog-breed val data...")
    dog_val_i = _preprocess_input(dog_val_i)
    
    print("Training model...")
    history, model = _train_model(dog_train_i, dog_train_t, dog_val_i, dog_val_t)
    label = "custom_cnn_dogbreed"
    display_training_history(history, label)

    print("Constructing predictor...")
    kwargs = {
        "model": model,
        "to_tensor": path_to_tensor,
        "preprocess_input": _preprocess_input
    }
    
    predictor = Predictor(_dogbreed_predictor, label, **kwargs)
    
    return predictor
  • Build predictor
In [36]:
predictor = build_custom_cnn_dogbreed_predictor(dog_train, dog_val)
predictors[PREDICTOR_TYPE_DOG_BREED][predictor.label] = predictor
Converting dog-breed train images to tensor...
100%|██████████| 6680/6680 [00:46<00:00, 142.39it/s]
Converting dog-breed validation images to tensor...
100%|██████████| 835/835 [00:08<00:00, 99.64it/s] 
Normalizing dog-breed train data...
Normalizing dog-breed val data...
Training model...
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_3 (Dense)              (None, 224, 224, 16)      64        
_________________________________________________________________
conv2d_103 (Conv2D)          (None, 223, 223, 16)      1040      
_________________________________________________________________
conv2d_104 (Conv2D)          (None, 222, 222, 16)      1040      
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 111, 111, 16)      0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 111, 111, 16)      0         
_________________________________________________________________
conv2d_105 (Conv2D)          (None, 110, 110, 32)      2080      
_________________________________________________________________
conv2d_106 (Conv2D)          (None, 109, 109, 32)      4128      
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 54, 54, 32)        0         
_________________________________________________________________
dropout_5 (Dropout)          (None, 54, 54, 32)        0         
_________________________________________________________________
conv2d_107 (Conv2D)          (None, 53, 53, 32)        4128      
_________________________________________________________________
conv2d_108 (Conv2D)          (None, 52, 52, 32)        4128      
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 26, 26, 32)        0         
_________________________________________________________________
dropout_6 (Dropout)          (None, 26, 26, 32)        0         
_________________________________________________________________
conv2d_109 (Conv2D)          (None, 25, 25, 64)        8256      
_________________________________________________________________
conv2d_110 (Conv2D)          (None, 24, 24, 64)        16448     
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 12, 12, 64)        0         
_________________________________________________________________
global_average_pooling2d_2 ( (None, 64)                0         
_________________________________________________________________
dense_4 (Dense)              (None, 133)               8645      
=================================================================
Total params: 49,957
Trainable params: 49,957
Non-trainable params: 0
_________________________________________________________________
Train on 6680 samples, validate on 835 samples
Epoch 1/200
6680/6680 [==============================] - 23s 3ms/step - loss: 4.8864 - acc: 0.0102 - val_loss: 4.8789 - val_acc: 0.0108

Epoch 00001: val_loss improved from inf to 4.87888, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 2/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.8723 - acc: 0.0115 - val_loss: 4.8689 - val_acc: 0.0108

Epoch 00002: val_loss improved from 4.87888 to 4.86893, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 3/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.8690 - acc: 0.0108 - val_loss: 4.8747 - val_acc: 0.0120

Epoch 00003: val_loss did not improve from 4.86893
Epoch 4/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.8608 - acc: 0.0108 - val_loss: 4.8496 - val_acc: 0.0192

Epoch 00004: val_loss improved from 4.86893 to 4.84961, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 5/200
6680/6680 [==============================] - 17s 3ms/step - loss: 4.8126 - acc: 0.0154 - val_loss: 4.8193 - val_acc: 0.0180

Epoch 00005: val_loss improved from 4.84961 to 4.81925, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 6/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.7394 - acc: 0.0226 - val_loss: 4.7553 - val_acc: 0.0204

Epoch 00006: val_loss improved from 4.81925 to 4.75526, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 7/200
6680/6680 [==============================] - 17s 3ms/step - loss: 4.6846 - acc: 0.0287 - val_loss: 4.9047 - val_acc: 0.0132

Epoch 00007: val_loss did not improve from 4.75526
Epoch 8/200
6680/6680 [==============================] - 17s 3ms/step - loss: 4.6308 - acc: 0.0274 - val_loss: 4.6669 - val_acc: 0.0228

Epoch 00008: val_loss improved from 4.75526 to 4.66685, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 9/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.5914 - acc: 0.0296 - val_loss: 4.7985 - val_acc: 0.0204

Epoch 00009: val_loss did not improve from 4.66685
Epoch 10/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.5491 - acc: 0.0323 - val_loss: 4.6626 - val_acc: 0.0275

Epoch 00010: val_loss improved from 4.66685 to 4.66264, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 11/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.5057 - acc: 0.0391 - val_loss: 4.8790 - val_acc: 0.0216

Epoch 00011: val_loss did not improve from 4.66264
Epoch 12/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.4728 - acc: 0.0412 - val_loss: 4.5201 - val_acc: 0.0347

Epoch 00012: val_loss improved from 4.66264 to 4.52014, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 13/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.4245 - acc: 0.0413 - val_loss: 4.6798 - val_acc: 0.0216

Epoch 00013: val_loss did not improve from 4.52014
Epoch 14/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.3802 - acc: 0.0473 - val_loss: 4.4363 - val_acc: 0.0311

Epoch 00014: val_loss improved from 4.52014 to 4.43630, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 15/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.3437 - acc: 0.0496 - val_loss: 4.4857 - val_acc: 0.0467

Epoch 00015: val_loss did not improve from 4.43630
Epoch 16/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.3106 - acc: 0.0533 - val_loss: 4.3380 - val_acc: 0.0527

Epoch 00016: val_loss improved from 4.43630 to 4.33803, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 17/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.2625 - acc: 0.0567 - val_loss: 4.5187 - val_acc: 0.0491

Epoch 00017: val_loss did not improve from 4.33803
Epoch 18/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.2353 - acc: 0.0621 - val_loss: 4.3376 - val_acc: 0.0587

Epoch 00018: val_loss improved from 4.33803 to 4.33755, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 19/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.1855 - acc: 0.0672 - val_loss: 4.4332 - val_acc: 0.0419

Epoch 00019: val_loss did not improve from 4.33755
Epoch 20/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.1566 - acc: 0.0719 - val_loss: 4.3028 - val_acc: 0.0707

Epoch 00020: val_loss improved from 4.33755 to 4.30278, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 21/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.1298 - acc: 0.0689 - val_loss: 4.2778 - val_acc: 0.0599

Epoch 00021: val_loss improved from 4.30278 to 4.27780, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 22/200
6680/6680 [==============================] - 16s 2ms/step - loss: 4.0965 - acc: 0.0811 - val_loss: 4.5060 - val_acc: 0.0407

Epoch 00022: val_loss did not improve from 4.27780
Epoch 23/200
6680/6680 [==============================] - 17s 2ms/step - loss: 4.0644 - acc: 0.0864 - val_loss: 4.3417 - val_acc: 0.0539

Epoch 00023: val_loss did not improve from 4.27780
Epoch 24/200
6680/6680 [==============================] - 17s 3ms/step - loss: 4.0141 - acc: 0.0897 - val_loss: 4.2727 - val_acc: 0.0695

Epoch 00024: val_loss improved from 4.27780 to 4.27268, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 25/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.9834 - acc: 0.0915 - val_loss: 4.2697 - val_acc: 0.0766

Epoch 00025: val_loss improved from 4.27268 to 4.26973, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 26/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.9692 - acc: 0.0987 - val_loss: 4.2834 - val_acc: 0.0743

Epoch 00026: val_loss did not improve from 4.26973
Epoch 27/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.9465 - acc: 0.0969 - val_loss: 4.1675 - val_acc: 0.0754

Epoch 00027: val_loss improved from 4.26973 to 4.16752, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 28/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.8985 - acc: 0.1022 - val_loss: 4.0537 - val_acc: 0.0922

Epoch 00028: val_loss improved from 4.16752 to 4.05371, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 29/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.8672 - acc: 0.1043 - val_loss: 4.0967 - val_acc: 0.0790

Epoch 00029: val_loss did not improve from 4.05371
Epoch 30/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.8513 - acc: 0.1118 - val_loss: 4.0130 - val_acc: 0.0874

Epoch 00030: val_loss improved from 4.05371 to 4.01302, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 31/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.8169 - acc: 0.1219 - val_loss: 4.1113 - val_acc: 0.0838

Epoch 00031: val_loss did not improve from 4.01302
Epoch 32/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.7673 - acc: 0.1266 - val_loss: 4.1574 - val_acc: 0.0850

Epoch 00032: val_loss did not improve from 4.01302
Epoch 33/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.7478 - acc: 0.1310 - val_loss: 3.9911 - val_acc: 0.0982

Epoch 00033: val_loss improved from 4.01302 to 3.99113, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 34/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.7104 - acc: 0.1329 - val_loss: 3.9329 - val_acc: 0.0958

Epoch 00034: val_loss improved from 3.99113 to 3.93295, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 35/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.6866 - acc: 0.1389 - val_loss: 4.0402 - val_acc: 0.0898

Epoch 00035: val_loss did not improve from 3.93295
Epoch 36/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.6503 - acc: 0.1418 - val_loss: 4.0783 - val_acc: 0.0910

Epoch 00036: val_loss did not improve from 3.93295
Epoch 37/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.6254 - acc: 0.1491 - val_loss: 4.3367 - val_acc: 0.0623

Epoch 00037: val_loss did not improve from 3.93295
Epoch 38/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.5941 - acc: 0.1528 - val_loss: 3.9375 - val_acc: 0.1305

Epoch 00038: val_loss did not improve from 3.93295
Epoch 39/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.5636 - acc: 0.1626 - val_loss: 3.9744 - val_acc: 0.0958

Epoch 00039: val_loss did not improve from 3.93295
Epoch 40/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.5445 - acc: 0.1552 - val_loss: 3.9836 - val_acc: 0.1162

Epoch 00040: val_loss did not improve from 3.93295
Epoch 41/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.5219 - acc: 0.1689 - val_loss: 3.9303 - val_acc: 0.1138

Epoch 00041: val_loss improved from 3.93295 to 3.93025, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 42/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.4952 - acc: 0.1678 - val_loss: 3.9995 - val_acc: 0.0994

Epoch 00042: val_loss did not improve from 3.93025
Epoch 43/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.4697 - acc: 0.1735 - val_loss: 3.8652 - val_acc: 0.1353

Epoch 00043: val_loss improved from 3.93025 to 3.86521, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 44/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.4250 - acc: 0.1847 - val_loss: 3.9638 - val_acc: 0.1281

Epoch 00044: val_loss did not improve from 3.86521
Epoch 45/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.4121 - acc: 0.1847 - val_loss: 3.8959 - val_acc: 0.1198

Epoch 00045: val_loss did not improve from 3.86521
Epoch 46/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.3862 - acc: 0.1868 - val_loss: 3.7783 - val_acc: 0.1293

Epoch 00046: val_loss improved from 3.86521 to 3.77827, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 47/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.3727 - acc: 0.1942 - val_loss: 3.7800 - val_acc: 0.1329

Epoch 00047: val_loss did not improve from 3.77827
Epoch 48/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.3433 - acc: 0.1931 - val_loss: 3.8179 - val_acc: 0.1234

Epoch 00048: val_loss did not improve from 3.77827
Epoch 49/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.3250 - acc: 0.2010 - val_loss: 3.8113 - val_acc: 0.1533

Epoch 00049: val_loss did not improve from 3.77827
Epoch 50/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.2953 - acc: 0.2048 - val_loss: 3.8348 - val_acc: 0.1305

Epoch 00050: val_loss did not improve from 3.77827
Epoch 51/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.2579 - acc: 0.2163 - val_loss: 3.9698 - val_acc: 0.1281

Epoch 00051: val_loss did not improve from 3.77827
Epoch 52/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.2334 - acc: 0.2196 - val_loss: 3.7396 - val_acc: 0.1305

Epoch 00052: val_loss improved from 3.77827 to 3.73965, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 53/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.2152 - acc: 0.2115 - val_loss: 4.0204 - val_acc: 0.0910

Epoch 00053: val_loss did not improve from 3.73965
Epoch 54/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.1935 - acc: 0.2314 - val_loss: 3.8828 - val_acc: 0.1210

Epoch 00054: val_loss did not improve from 3.73965
Epoch 55/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.1805 - acc: 0.2281 - val_loss: 3.8668 - val_acc: 0.1257

Epoch 00055: val_loss did not improve from 3.73965
Epoch 56/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.1537 - acc: 0.2281 - val_loss: 3.7243 - val_acc: 0.1497

Epoch 00056: val_loss improved from 3.73965 to 3.72432, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 57/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.1261 - acc: 0.2430 - val_loss: 3.9695 - val_acc: 0.1162

Epoch 00057: val_loss did not improve from 3.72432
Epoch 58/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.1025 - acc: 0.2380 - val_loss: 3.8037 - val_acc: 0.1461

Epoch 00058: val_loss did not improve from 3.72432
Epoch 59/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.0816 - acc: 0.2503 - val_loss: 3.8659 - val_acc: 0.1509

Epoch 00059: val_loss did not improve from 3.72432
Epoch 60/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.0788 - acc: 0.2445 - val_loss: 3.7160 - val_acc: 0.1605

Epoch 00060: val_loss improved from 3.72432 to 3.71602, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 61/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.0391 - acc: 0.2536 - val_loss: 3.7347 - val_acc: 0.1629

Epoch 00061: val_loss did not improve from 3.71602
Epoch 62/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.0161 - acc: 0.2578 - val_loss: 3.9257 - val_acc: 0.1234

Epoch 00062: val_loss did not improve from 3.71602
Epoch 63/200
6680/6680 [==============================] - 16s 2ms/step - loss: 3.0107 - acc: 0.2614 - val_loss: 3.8146 - val_acc: 0.1677

Epoch 00063: val_loss did not improve from 3.71602
Epoch 64/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.9951 - acc: 0.2627 - val_loss: 3.7758 - val_acc: 0.1473

Epoch 00064: val_loss did not improve from 3.71602
Epoch 65/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.9662 - acc: 0.2675 - val_loss: 3.8136 - val_acc: 0.1617

Epoch 00065: val_loss did not improve from 3.71602
Epoch 66/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.9337 - acc: 0.2713 - val_loss: 3.9185 - val_acc: 0.1293

Epoch 00066: val_loss did not improve from 3.71602
Epoch 67/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.9168 - acc: 0.2814 - val_loss: 3.9887 - val_acc: 0.1246

Epoch 00067: val_loss did not improve from 3.71602
Epoch 68/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.8888 - acc: 0.2864 - val_loss: 3.8281 - val_acc: 0.1317

Epoch 00068: val_loss did not improve from 3.71602
Epoch 69/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.8604 - acc: 0.2927 - val_loss: 3.5944 - val_acc: 0.1665

Epoch 00069: val_loss improved from 3.71602 to 3.59438, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 70/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.8583 - acc: 0.2904 - val_loss: 3.6701 - val_acc: 0.1605

Epoch 00070: val_loss did not improve from 3.59438
Epoch 71/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.8549 - acc: 0.2948 - val_loss: 3.9664 - val_acc: 0.1521

Epoch 00071: val_loss did not improve from 3.59438
Epoch 72/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.8119 - acc: 0.2963 - val_loss: 3.5895 - val_acc: 0.1832

Epoch 00072: val_loss improved from 3.59438 to 3.58950, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 73/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.7956 - acc: 0.3009 - val_loss: 3.8423 - val_acc: 0.1725

Epoch 00073: val_loss did not improve from 3.58950
Epoch 74/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.7896 - acc: 0.3087 - val_loss: 3.6574 - val_acc: 0.1832

Epoch 00074: val_loss did not improve from 3.58950
Epoch 75/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.7687 - acc: 0.3108 - val_loss: 3.7695 - val_acc: 0.1784

Epoch 00075: val_loss did not improve from 3.58950
Epoch 76/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.7452 - acc: 0.3103 - val_loss: 3.8964 - val_acc: 0.1545

Epoch 00076: val_loss did not improve from 3.58950
Epoch 77/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.7105 - acc: 0.3254 - val_loss: 3.8079 - val_acc: 0.1461

Epoch 00077: val_loss did not improve from 3.58950
Epoch 78/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.7259 - acc: 0.3148 - val_loss: 4.0837 - val_acc: 0.1749

Epoch 00078: val_loss did not improve from 3.58950
Epoch 79/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.6829 - acc: 0.3263 - val_loss: 3.8424 - val_acc: 0.1749

Epoch 00079: val_loss did not improve from 3.58950
Epoch 80/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.6718 - acc: 0.3283 - val_loss: 4.0233 - val_acc: 0.1713

Epoch 00080: val_loss did not improve from 3.58950
Epoch 81/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.6478 - acc: 0.3316 - val_loss: 4.0409 - val_acc: 0.1305

Epoch 00081: val_loss did not improve from 3.58950
Epoch 82/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.6465 - acc: 0.3364 - val_loss: 3.8099 - val_acc: 0.1856

Epoch 00082: val_loss did not improve from 3.58950
Epoch 83/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.6110 - acc: 0.3416 - val_loss: 3.6996 - val_acc: 0.1772

Epoch 00083: val_loss did not improve from 3.58950
Epoch 84/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.6113 - acc: 0.3415 - val_loss: 3.6273 - val_acc: 0.1808

Epoch 00084: val_loss did not improve from 3.58950
Epoch 85/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.5833 - acc: 0.3361 - val_loss: 3.7537 - val_acc: 0.1820

Epoch 00085: val_loss did not improve from 3.58950
Epoch 86/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.5891 - acc: 0.3422 - val_loss: 3.6767 - val_acc: 0.1856

Epoch 00086: val_loss did not improve from 3.58950
Epoch 87/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.5560 - acc: 0.3500 - val_loss: 3.5830 - val_acc: 0.2228

Epoch 00087: val_loss improved from 3.58950 to 3.58296, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 88/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.5300 - acc: 0.3576 - val_loss: 3.9200 - val_acc: 0.1784

Epoch 00088: val_loss did not improve from 3.58296
Epoch 89/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.5290 - acc: 0.3599 - val_loss: 3.8305 - val_acc: 0.2048

Epoch 00089: val_loss did not improve from 3.58296
Epoch 90/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.5071 - acc: 0.3639 - val_loss: 3.6165 - val_acc: 0.2096

Epoch 00090: val_loss did not improve from 3.58296
Epoch 91/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.4846 - acc: 0.3689 - val_loss: 3.6788 - val_acc: 0.1928

Epoch 00091: val_loss did not improve from 3.58296
Epoch 92/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.4924 - acc: 0.3630 - val_loss: 3.5993 - val_acc: 0.1892

Epoch 00092: val_loss did not improve from 3.58296
Epoch 93/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.4569 - acc: 0.3698 - val_loss: 4.0826 - val_acc: 0.1904

Epoch 00093: val_loss did not improve from 3.58296
Epoch 94/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.4466 - acc: 0.3740 - val_loss: 3.5672 - val_acc: 0.2096

Epoch 00094: val_loss improved from 3.58296 to 3.56715, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 95/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.4357 - acc: 0.3777 - val_loss: 3.7805 - val_acc: 0.1892

Epoch 00095: val_loss did not improve from 3.56715
Epoch 96/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.4182 - acc: 0.3760 - val_loss: 4.0962 - val_acc: 0.1713

Epoch 00096: val_loss did not improve from 3.56715
Epoch 97/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.4069 - acc: 0.3835 - val_loss: 3.8642 - val_acc: 0.2072

Epoch 00097: val_loss did not improve from 3.56715
Epoch 98/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.4043 - acc: 0.3855 - val_loss: 3.5982 - val_acc: 0.1952

Epoch 00098: val_loss did not improve from 3.56715
Epoch 99/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.3801 - acc: 0.3826 - val_loss: 3.9585 - val_acc: 0.1976

Epoch 00099: val_loss did not improve from 3.56715
Epoch 100/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.3792 - acc: 0.3871 - val_loss: 3.6416 - val_acc: 0.1868

Epoch 00100: val_loss did not improve from 3.56715
Epoch 101/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.3551 - acc: 0.3922 - val_loss: 3.8697 - val_acc: 0.1976

Epoch 00101: val_loss did not improve from 3.56715
Epoch 102/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.3381 - acc: 0.3943 - val_loss: 3.9177 - val_acc: 0.2060

Epoch 00102: val_loss did not improve from 3.56715
Epoch 103/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.3379 - acc: 0.4040 - val_loss: 3.8681 - val_acc: 0.2263

Epoch 00103: val_loss did not improve from 3.56715
Epoch 104/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.3091 - acc: 0.4031 - val_loss: 3.8087 - val_acc: 0.2048

Epoch 00104: val_loss did not improve from 3.56715
Epoch 105/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.3097 - acc: 0.3961 - val_loss: 3.5754 - val_acc: 0.2132

Epoch 00105: val_loss did not improve from 3.56715
Epoch 106/200
6680/6680 [==============================] - 17s 3ms/step - loss: 2.2803 - acc: 0.4067 - val_loss: 3.9141 - val_acc: 0.1820

Epoch 00106: val_loss did not improve from 3.56715
Epoch 107/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.2750 - acc: 0.4051 - val_loss: 3.8227 - val_acc: 0.2216

Epoch 00107: val_loss did not improve from 3.56715
Epoch 108/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.2584 - acc: 0.4114 - val_loss: 3.9078 - val_acc: 0.1928

Epoch 00108: val_loss did not improve from 3.56715
Epoch 109/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.2445 - acc: 0.4195 - val_loss: 3.6999 - val_acc: 0.2323

Epoch 00109: val_loss did not improve from 3.56715
Epoch 110/200
6680/6680 [==============================] - 17s 2ms/step - loss: 2.2354 - acc: 0.4166 - val_loss: 3.6547 - val_acc: 0.2204

Epoch 00110: val_loss did not improve from 3.56715
Epoch 111/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.2451 - acc: 0.4132 - val_loss: 4.0070 - val_acc: 0.2048

Epoch 00111: val_loss did not improve from 3.56715
Epoch 112/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.2295 - acc: 0.4187 - val_loss: 3.4626 - val_acc: 0.2156

Epoch 00112: val_loss improved from 3.56715 to 3.46259, saving model to saved_models/custom_cnn_dogbreed.h5
Epoch 113/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.2021 - acc: 0.4253 - val_loss: 3.9759 - val_acc: 0.2228

Epoch 00113: val_loss did not improve from 3.46259
Epoch 114/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1945 - acc: 0.4256 - val_loss: 4.0492 - val_acc: 0.2299

Epoch 00114: val_loss did not improve from 3.46259
Epoch 115/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1921 - acc: 0.4286 - val_loss: 3.5803 - val_acc: 0.2251

Epoch 00115: val_loss did not improve from 3.46259
Epoch 116/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1686 - acc: 0.4301 - val_loss: 4.1029 - val_acc: 0.1713

Epoch 00116: val_loss did not improve from 3.46259
Epoch 117/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1658 - acc: 0.4328 - val_loss: 4.2176 - val_acc: 0.2228

Epoch 00117: val_loss did not improve from 3.46259
Epoch 118/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1618 - acc: 0.4335 - val_loss: 3.5851 - val_acc: 0.2251

Epoch 00118: val_loss did not improve from 3.46259
Epoch 119/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1558 - acc: 0.4352 - val_loss: 3.8860 - val_acc: 0.2144

Epoch 00119: val_loss did not improve from 3.46259
Epoch 120/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1352 - acc: 0.4313 - val_loss: 3.6268 - val_acc: 0.1952

Epoch 00120: val_loss did not improve from 3.46259
Epoch 121/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1370 - acc: 0.4347 - val_loss: 3.7412 - val_acc: 0.2180

Epoch 00121: val_loss did not improve from 3.46259
Epoch 122/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1266 - acc: 0.4487 - val_loss: 3.6165 - val_acc: 0.2299

Epoch 00122: val_loss did not improve from 3.46259
Epoch 123/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.1077 - acc: 0.4440 - val_loss: 3.5526 - val_acc: 0.2204

Epoch 00123: val_loss did not improve from 3.46259
Epoch 124/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.0843 - acc: 0.4454 - val_loss: 3.5943 - val_acc: 0.2323

Epoch 00124: val_loss did not improve from 3.46259
Epoch 125/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.0617 - acc: 0.4491 - val_loss: 4.0162 - val_acc: 0.2048

Epoch 00125: val_loss did not improve from 3.46259
Epoch 126/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.0651 - acc: 0.4504 - val_loss: 3.7108 - val_acc: 0.2347

Epoch 00126: val_loss did not improve from 3.46259
Epoch 127/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.0707 - acc: 0.4540 - val_loss: 3.5435 - val_acc: 0.2275

Epoch 00127: val_loss did not improve from 3.46259
Epoch 128/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.0302 - acc: 0.4638 - val_loss: 3.6813 - val_acc: 0.2371

Epoch 00128: val_loss did not improve from 3.46259
Epoch 129/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.0342 - acc: 0.4537 - val_loss: 3.7873 - val_acc: 0.2359

Epoch 00129: val_loss did not improve from 3.46259
Epoch 130/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.0178 - acc: 0.4635 - val_loss: 3.7475 - val_acc: 0.2251

Epoch 00130: val_loss did not improve from 3.46259
Epoch 131/200
6680/6680 [==============================] - 16s 2ms/step - loss: 2.0350 - acc: 0.4638 - val_loss: 3.9086 - val_acc: 0.2491

Epoch 00131: val_loss did not improve from 3.46259
Epoch 132/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9937 - acc: 0.4702 - val_loss: 3.9467 - val_acc: 0.2251

Epoch 00132: val_loss did not improve from 3.46259
Epoch 133/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9947 - acc: 0.4672 - val_loss: 3.8562 - val_acc: 0.2635

Epoch 00133: val_loss did not improve from 3.46259
Epoch 134/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9988 - acc: 0.4687 - val_loss: 3.6820 - val_acc: 0.2299

Epoch 00134: val_loss did not improve from 3.46259
Epoch 135/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9867 - acc: 0.4729 - val_loss: 3.8874 - val_acc: 0.2299

Epoch 00135: val_loss did not improve from 3.46259
Epoch 136/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9687 - acc: 0.4684 - val_loss: 3.7346 - val_acc: 0.2431

Epoch 00136: val_loss did not improve from 3.46259
Epoch 137/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9912 - acc: 0.4717 - val_loss: 3.6006 - val_acc: 0.2371

Epoch 00137: val_loss did not improve from 3.46259
Epoch 138/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9615 - acc: 0.4774 - val_loss: 4.1224 - val_acc: 0.2216

Epoch 00138: val_loss did not improve from 3.46259
Epoch 139/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9459 - acc: 0.4783 - val_loss: 3.7061 - val_acc: 0.2228

Epoch 00139: val_loss did not improve from 3.46259
Epoch 140/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9544 - acc: 0.4777 - val_loss: 3.8965 - val_acc: 0.2228

Epoch 00140: val_loss did not improve from 3.46259
Epoch 141/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9439 - acc: 0.4789 - val_loss: 4.0510 - val_acc: 0.2359

Epoch 00141: val_loss did not improve from 3.46259
Epoch 142/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9364 - acc: 0.4825 - val_loss: 3.7884 - val_acc: 0.2491

Epoch 00142: val_loss did not improve from 3.46259
Epoch 143/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9214 - acc: 0.4829 - val_loss: 3.9418 - val_acc: 0.2299

Epoch 00143: val_loss did not improve from 3.46259
Epoch 144/200
6680/6680 [==============================] - 17s 2ms/step - loss: 1.9218 - acc: 0.4888 - val_loss: 4.0181 - val_acc: 0.2144

Epoch 00144: val_loss did not improve from 3.46259
Epoch 145/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.9088 - acc: 0.4768 - val_loss: 4.1876 - val_acc: 0.2228

Epoch 00145: val_loss did not improve from 3.46259
Epoch 146/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8978 - acc: 0.4895 - val_loss: 4.1837 - val_acc: 0.2251

Epoch 00146: val_loss did not improve from 3.46259
Epoch 147/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8892 - acc: 0.4928 - val_loss: 4.0109 - val_acc: 0.2251

Epoch 00147: val_loss did not improve from 3.46259
Epoch 148/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8862 - acc: 0.4966 - val_loss: 4.2222 - val_acc: 0.2287

Epoch 00148: val_loss did not improve from 3.46259
Epoch 149/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8869 - acc: 0.4925 - val_loss: 3.5595 - val_acc: 0.2299

Epoch 00149: val_loss did not improve from 3.46259
Epoch 150/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8875 - acc: 0.4957 - val_loss: 4.1125 - val_acc: 0.2228

Epoch 00150: val_loss did not improve from 3.46259
Epoch 151/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8856 - acc: 0.4904 - val_loss: 4.0253 - val_acc: 0.2359

Epoch 00151: val_loss did not improve from 3.46259
Epoch 152/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8550 - acc: 0.4943 - val_loss: 3.7734 - val_acc: 0.2671

Epoch 00152: val_loss did not improve from 3.46259
Epoch 153/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8202 - acc: 0.5100 - val_loss: 3.8307 - val_acc: 0.2587

Epoch 00153: val_loss did not improve from 3.46259
Epoch 154/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8311 - acc: 0.5004 - val_loss: 3.6826 - val_acc: 0.2228

Epoch 00154: val_loss did not improve from 3.46259
Epoch 155/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8393 - acc: 0.5022 - val_loss: 3.6558 - val_acc: 0.2623

Epoch 00155: val_loss did not improve from 3.46259
Epoch 156/200
6680/6680 [==============================] - 17s 2ms/step - loss: 1.8196 - acc: 0.5042 - val_loss: 3.7495 - val_acc: 0.2216

Epoch 00156: val_loss did not improve from 3.46259
Epoch 157/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.8172 - acc: 0.5106 - val_loss: 3.8006 - val_acc: 0.2623

Epoch 00157: val_loss did not improve from 3.46259
Epoch 158/200
6680/6680 [==============================] - 17s 3ms/step - loss: 1.8223 - acc: 0.5040 - val_loss: 3.6889 - val_acc: 0.2635

Epoch 00158: val_loss did not improve from 3.46259
Epoch 159/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7916 - acc: 0.5072 - val_loss: 3.9203 - val_acc: 0.2419

Epoch 00159: val_loss did not improve from 3.46259
Epoch 160/200
6680/6680 [==============================] - 17s 2ms/step - loss: 1.8140 - acc: 0.5039 - val_loss: 3.4881 - val_acc: 0.2599

Epoch 00160: val_loss did not improve from 3.46259
Epoch 161/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7810 - acc: 0.5190 - val_loss: 3.7329 - val_acc: 0.2575

Epoch 00161: val_loss did not improve from 3.46259
Epoch 162/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7834 - acc: 0.5105 - val_loss: 4.2750 - val_acc: 0.2455

Epoch 00162: val_loss did not improve from 3.46259
Epoch 163/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7799 - acc: 0.5115 - val_loss: 3.9129 - val_acc: 0.2036

Epoch 00163: val_loss did not improve from 3.46259
Epoch 164/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7769 - acc: 0.5145 - val_loss: 3.6452 - val_acc: 0.2395

Epoch 00164: val_loss did not improve from 3.46259
Epoch 165/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7955 - acc: 0.5139 - val_loss: 3.7352 - val_acc: 0.2192

Epoch 00165: val_loss did not improve from 3.46259
Epoch 166/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7635 - acc: 0.5085 - val_loss: 4.1943 - val_acc: 0.2503

Epoch 00166: val_loss did not improve from 3.46259
Epoch 167/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7290 - acc: 0.5204 - val_loss: 3.7613 - val_acc: 0.2407

Epoch 00167: val_loss did not improve from 3.46259
Epoch 168/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7578 - acc: 0.5247 - val_loss: 3.7749 - val_acc: 0.2719

Epoch 00168: val_loss did not improve from 3.46259
Epoch 169/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7494 - acc: 0.5259 - val_loss: 3.7768 - val_acc: 0.2623

Epoch 00169: val_loss did not improve from 3.46259
Epoch 170/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7540 - acc: 0.5183 - val_loss: 4.0094 - val_acc: 0.2467

Epoch 00170: val_loss did not improve from 3.46259
Epoch 171/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7497 - acc: 0.5205 - val_loss: 3.5294 - val_acc: 0.2419

Epoch 00171: val_loss did not improve from 3.46259
Epoch 172/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7281 - acc: 0.5225 - val_loss: 4.1218 - val_acc: 0.2311

Epoch 00172: val_loss did not improve from 3.46259
Epoch 173/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7327 - acc: 0.5269 - val_loss: 3.9634 - val_acc: 0.2371

Epoch 00173: val_loss did not improve from 3.46259
Epoch 174/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7061 - acc: 0.5335 - val_loss: 3.8390 - val_acc: 0.2491

Epoch 00174: val_loss did not improve from 3.46259
Epoch 175/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7276 - acc: 0.5159 - val_loss: 4.2438 - val_acc: 0.2311

Epoch 00175: val_loss did not improve from 3.46259
Epoch 176/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7085 - acc: 0.5262 - val_loss: 3.5287 - val_acc: 0.2395

Epoch 00176: val_loss did not improve from 3.46259
Epoch 177/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6921 - acc: 0.5341 - val_loss: 3.7089 - val_acc: 0.2671

Epoch 00177: val_loss did not improve from 3.46259
Epoch 178/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.7239 - acc: 0.5187 - val_loss: 4.2073 - val_acc: 0.2419

Epoch 00178: val_loss did not improve from 3.46259
Epoch 179/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6844 - acc: 0.5412 - val_loss: 4.0771 - val_acc: 0.2455

Epoch 00179: val_loss did not improve from 3.46259
Epoch 180/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6940 - acc: 0.5377 - val_loss: 3.8313 - val_acc: 0.2467

Epoch 00180: val_loss did not improve from 3.46259
Epoch 181/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6902 - acc: 0.5329 - val_loss: 3.8899 - val_acc: 0.2515

Epoch 00181: val_loss did not improve from 3.46259
Epoch 182/200
6680/6680 [==============================] - 17s 2ms/step - loss: 1.6905 - acc: 0.5365 - val_loss: 3.7646 - val_acc: 0.2695

Epoch 00182: val_loss did not improve from 3.46259
Epoch 183/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6816 - acc: 0.5406 - val_loss: 3.7362 - val_acc: 0.2659

Epoch 00183: val_loss did not improve from 3.46259
Epoch 184/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6720 - acc: 0.5359 - val_loss: 3.6386 - val_acc: 0.2551

Epoch 00184: val_loss did not improve from 3.46259
Epoch 185/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6479 - acc: 0.5449 - val_loss: 3.7829 - val_acc: 0.2683

Epoch 00185: val_loss did not improve from 3.46259
Epoch 186/200
6680/6680 [==============================] - 17s 2ms/step - loss: 1.6800 - acc: 0.5322 - val_loss: 3.8990 - val_acc: 0.2599

Epoch 00186: val_loss did not improve from 3.46259
Epoch 187/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6420 - acc: 0.5472 - val_loss: 4.0441 - val_acc: 0.2647

Epoch 00187: val_loss did not improve from 3.46259
Epoch 188/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6594 - acc: 0.5332 - val_loss: 4.4382 - val_acc: 0.2323

Epoch 00188: val_loss did not improve from 3.46259
Epoch 189/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6614 - acc: 0.5430 - val_loss: 3.7678 - val_acc: 0.2407

Epoch 00189: val_loss did not improve from 3.46259
Epoch 190/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6319 - acc: 0.5485 - val_loss: 3.8151 - val_acc: 0.2743

Epoch 00190: val_loss did not improve from 3.46259
Epoch 191/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6446 - acc: 0.5485 - val_loss: 4.2256 - val_acc: 0.2515

Epoch 00191: val_loss did not improve from 3.46259
Epoch 192/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6415 - acc: 0.5410 - val_loss: 4.0645 - val_acc: 0.2695

Epoch 00192: val_loss did not improve from 3.46259
Epoch 193/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6688 - acc: 0.5412 - val_loss: 3.9719 - val_acc: 0.2311

Epoch 00193: val_loss did not improve from 3.46259
Epoch 194/200
6680/6680 [==============================] - 17s 2ms/step - loss: 1.6042 - acc: 0.5591 - val_loss: 3.8964 - val_acc: 0.2455

Epoch 00194: val_loss did not improve from 3.46259
Epoch 195/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6185 - acc: 0.5475 - val_loss: 3.9227 - val_acc: 0.2479

Epoch 00195: val_loss did not improve from 3.46259
Epoch 196/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6163 - acc: 0.5524 - val_loss: 4.0478 - val_acc: 0.2491

Epoch 00196: val_loss did not improve from 3.46259
Epoch 197/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6138 - acc: 0.5504 - val_loss: 4.1118 - val_acc: 0.2623

Epoch 00197: val_loss did not improve from 3.46259
Epoch 198/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6216 - acc: 0.5525 - val_loss: 3.8026 - val_acc: 0.2311

Epoch 00198: val_loss did not improve from 3.46259
Epoch 199/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.5830 - acc: 0.5615 - val_loss: 4.3001 - val_acc: 0.2491

Epoch 00199: val_loss did not improve from 3.46259
Epoch 200/200
6680/6680 [==============================] - 16s 2ms/step - loss: 1.6036 - acc: 0.5534 - val_loss: 3.6789 - val_acc: 0.2407

Epoch 00200: val_loss did not improve from 3.46259
Constructing predictor...
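In the 200-epoch run above, val_loss last improved at epoch 112, so the remaining ~90 epochs cost GPU time without producing a better checkpoint. Keras ships an EarlyStopping callback (monitor="val_loss", patience=...) that could sit next to the ModelCheckpoint to cut runs like this short; the underlying rule is simple enough to sketch in plain Python (the function name and toy losses below are illustrative, not taken from this project):

```python
def early_stop_epoch(val_losses, patience=10):
    """Return the 1-based epoch at which training would stop, given
    per-epoch validation losses and a patience window (epochs allowed
    without improvement). If patience is never exhausted, training
    runs to the end."""
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            since_best = 0
        else:
            since_best += 1
            if since_best >= patience:
                return epoch
    return len(val_losses)

# Toy run: loss improves for three epochs, then plateaus.
print(early_stop_epoch([3.9, 3.6, 3.5, 3.7, 3.8, 3.6, 3.9, 3.7], patience=3))  # → 6
```

With the plateau seen in the log above (no improvement after epoch 112), a patience of 10–20 epochs would have ended the run well before epoch 200 while still keeping the best checkpoint on disk.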

Transfer-learning-based dog-breed predictor

Using the VGG16, VGG19 and InceptionV3 architectures.

In [27]:
def build_finetuned_dogbreed_predictor(dog_train, dog_val, model_class,
                                       label, preprocess_input_func):
    """Builds a Predictor class encapsulating finetuned model.
    
    Parameters:
    -----------
    dog_train: tuple
        Container with train images filepaths and train targets.
    dog_val: tuple
        Container with validation images filepaths and validation targets.
    model_class: function
        Keras constructor of a pretrained architecture, e.g. VGG19, ResNet50.
    label: str
        Model label for ModelCheckpoint output filename and Predictor identification.
    preprocess_input_func: function
        Function for preprocessing inputs for the chosen network architecture.
        
    Returns:
    -----------
    predictor: Predictor
        Object capable of making predictions with the model and its
        preprocessing functions stored inside kwargs.
    """  
    def _dogbreed_predictor(img_path, **kwargs):
        clf = kwargs.get("model")
        to_tensor_function = kwargs.get("to_tensor")
        preprocess_input_function = kwargs.get("preprocess_input")
        
        img = to_tensor_function(img_path)
        img = preprocess_input_function(img)
        
        result = np.argmax(clf.predict(img))
        
        return result
    
    def _train_model(model_class, label, dog_train_i, dog_train_t, dog_val_i, dog_val_t):
        """Function for performing trainsfer_learning on specified architecture.
        Model is loaded without top dense layers and with 'imagenet' weights. All
        layers are frozen and trainable GlobalAveragePooling2D + output dense 
        layers are added. It saves best model to SAVED_MODELS_DIR.

        Parameters:
        -----------
        model_class: function
            Keras constructor of a pretrained architecture, e.g. VGG19, ResNet50.
        label: str
            Model label for ModelCheckpoint output filename.
        dog_train_i: numpy.ndarray
            Train images loaded as a numpy array.
        dog_train_t: numpy.ndarray
            One-hot encoded train targets.
        dog_val_i: numpy.ndarray
            Validation images loaded as a numpy array.
        dog_val_t: numpy.ndarray
            One-hot encoded validation targets.

        Returns:
        -----------
        history: keras.callbacks.History
            Object containing data about the model training process.
        model: Model
            Trained model with weights loaded from the best epoch.
        """
        model = model_class(weights="imagenet", include_top=False)
        
        last_layer = model.layers[-1].output
        x = GlobalAveragePooling2D(name="g_avg_pooling_2d")(last_layer)
        out = Dense(AIND_DOG_CLASSES_NUM, activation="softmax", name="custom_fc1")(x)

        custom_model = Model(model.input, out)
        
        # Freeze every pretrained layer; only the newly added head stays trainable.
        for layer in model.layers:
            layer.trainable = False

        custom_model.summary()
        
        custom_model.compile(
            optimizer=RMSprop(0.001), loss="categorical_crossentropy", metrics=["accuracy"]
        )

        best_model_filepath = os.path.join(SAVED_MODELS_DIR, "{}_dogbreed.h5".format(label))
        checkpointer = ModelCheckpoint(
            filepath=best_model_filepath, verbose=1, save_best_only=True, 
            save_weights_only=False
        )


        history = custom_model.fit(
            dog_train_i, dog_train_t, 
            validation_data=(dog_val_i, dog_val_t),
            epochs=50, batch_size=32, callbacks=[checkpointer], verbose=1
        )

        custom_model = load_model(best_model_filepath, compile=False)
        
        return history, custom_model
        
    dog_train_i, dog_train_t = dog_train
    dog_val_i, dog_val_t = dog_val
    
    print("Converting dog-breed train images to tensor...")
    time.sleep(0.25)
    dog_train_i = paths_to_tensor(dog_train_i)
    
    print("Converting dog-breed validation images to tensor...")
    time.sleep(0.25)
    dog_val_i = paths_to_tensor(dog_val_i)
    
    print("Normalizing dog-breed train data...")
    dog_train_i = preprocess_input_func(dog_train_i)
    
    print("Normalizing dog-breed val data...")
    dog_val_i = preprocess_input_func(dog_val_i)
    
    print("Training model...")
    history, model = _train_model(
        model_class, label, dog_train_i, dog_train_t, dog_val_i, dog_val_t)
    display_training_history(history, label)

    print("Constructing predictor...")    
    kwargs = {
        "model": model,
        "to_tensor": path_to_tensor,
        "preprocess_input": preprocess_input_func
    }
    
    predictor = Predictor(_dogbreed_predictor, label, **kwargs)
    
    return predictor
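As a sanity check on the head that `_train_model` attaches: VGG16's final convolutional block outputs 512 feature maps, GlobalAveragePooling2D collapses each map to one scalar, and the softmax Dense layer maps those 512 values to the 133 breed classes (assuming AIND_DOG_CLASSES_NUM is 133, as in this dataset), so the only trainable parameters are that layer's weights and biases:

```python
# Trainable-parameter count of the custom Dense head on top of VGG16.
channels = 512   # feature maps after VGG16's block5_pool, pooled to one value each
classes = 133    # dog-breed classes (AIND_DOG_CLASSES_NUM)
trainable = channels * classes + classes  # weight matrix + bias vector
print(trainable)  # 68229, matching "Trainable params: 68,229" in the model summary
```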
Build predictors
In [28]:
predictor = build_finetuned_dogbreed_predictor(
    dog_train, dog_val, VGG16, "vgg16", vgg16_preprocess_input)
predictors[PREDICTOR_TYPE_DOG_BREED][predictor.label] = predictor
Converting dog-breed train images to tensor...
100%|██████████| 6680/6680 [01:37<00:00, 68.24it/s]
Converting dog-breed validation images to tensor...
100%|██████████| 835/835 [00:09<00:00, 84.87it/s] 
Normalizing dog-breed train data...
Normalizing dog-breed val data...
Training model...
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         (None, None, None, 3)     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, None, None, 64)    1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, None, None, 64)    36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, None, None, 64)    0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, None, None, 128)   73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, None, None, 128)   147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, None, None, 128)   0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, None, None, 256)   295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, None, None, 256)   0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, None, None, 512)   1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, None, None, 512)   0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, None, None, 512)   0         
_________________________________________________________________
g_avg_pooling_2d (GlobalAver (None, 512)               0         
_________________________________________________________________
custom_fc1 (Dense)           (None, 133)               68229     
=================================================================
Total params: 14,782,917
Trainable params: 68,229
Non-trainable params: 14,714,688
_________________________________________________________________
Train on 6680 samples, validate on 835 samples
Epoch 1/50
6680/6680 [==============================] - 34s 5ms/step - loss: 11.9159 - acc: 0.1204 - val_loss: 10.1135 - val_acc: 0.2311

Epoch 00001: val_loss improved from inf to 10.11346, saving model to saved_models/vgg16_dogbreed.h5
Epoch 2/50
6680/6680 [==============================] - 32s 5ms/step - loss: 9.1852 - acc: 0.3106 - val_loss: 9.3102 - val_acc: 0.3030

Epoch 00002: val_loss improved from 10.11346 to 9.31019, saving model to saved_models/vgg16_dogbreed.h5
Epoch 3/50
6680/6680 [==============================] - 31s 5ms/step - loss: 8.4702 - acc: 0.3888 - val_loss: 8.7219 - val_acc: 0.3377

Epoch 00003: val_loss improved from 9.31019 to 8.72186, saving model to saved_models/vgg16_dogbreed.h5
Epoch 4/50
6680/6680 [==============================] - 32s 5ms/step - loss: 7.7286 - acc: 0.4368 - val_loss: 8.0893 - val_acc: 0.3844

Epoch 00004: val_loss improved from 8.72186 to 8.08933, saving model to saved_models/vgg16_dogbreed.h5
Epoch 5/50
6680/6680 [==============================] - 32s 5ms/step - loss: 7.3206 - acc: 0.4918 - val_loss: 7.9439 - val_acc: 0.4000

Epoch 00005: val_loss improved from 8.08933 to 7.94392, saving model to saved_models/vgg16_dogbreed.h5
Epoch 6/50
6680/6680 [==============================] - 31s 5ms/step - loss: 7.1499 - acc: 0.5150 - val_loss: 7.7696 - val_acc: 0.4168

Epoch 00006: val_loss improved from 7.94392 to 7.76962, saving model to saved_models/vgg16_dogbreed.h5
Epoch 7/50
6680/6680 [==============================] - 31s 5ms/step - loss: 6.9930 - acc: 0.5310 - val_loss: 7.7377 - val_acc: 0.4287

Epoch 00007: val_loss improved from 7.76962 to 7.73768, saving model to saved_models/vgg16_dogbreed.h5
Epoch 8/50
6680/6680 [==============================] - 32s 5ms/step - loss: 6.8494 - acc: 0.5433 - val_loss: 7.6304 - val_acc: 0.4467

Epoch 00008: val_loss improved from 7.73768 to 7.63038, saving model to saved_models/vgg16_dogbreed.h5
Epoch 9/50
6680/6680 [==============================] - 31s 5ms/step - loss: 6.7848 - acc: 0.5588 - val_loss: 7.5746 - val_acc: 0.4467

Epoch 00009: val_loss improved from 7.63038 to 7.57456, saving model to saved_models/vgg16_dogbreed.h5
Epoch 10/50
6680/6680 [==============================] - 32s 5ms/step - loss: 6.5905 - acc: 0.5686 - val_loss: 7.4039 - val_acc: 0.4467

Epoch 00010: val_loss improved from 7.57456 to 7.40391, saving model to saved_models/vgg16_dogbreed.h5
Epoch 11/50
6680/6680 [==============================] - 32s 5ms/step - loss: 6.4541 - acc: 0.5795 - val_loss: 7.2331 - val_acc: 0.4671

Epoch 00011: val_loss improved from 7.40391 to 7.23312, saving model to saved_models/vgg16_dogbreed.h5
Epoch 12/50
6680/6680 [==============================] - 32s 5ms/step - loss: 6.3414 - acc: 0.5895 - val_loss: 7.2421 - val_acc: 0.4623

Epoch 00012: val_loss did not improve from 7.23312
Epoch 13/50
6680/6680 [==============================] - 32s 5ms/step - loss: 6.1597 - acc: 0.6016 - val_loss: 7.1135 - val_acc: 0.4671

Epoch 00013: val_loss improved from 7.23312 to 7.11349, saving model to saved_models/vgg16_dogbreed.h5
Epoch 14/50
6680/6680 [==============================] - 32s 5ms/step - loss: 6.0950 - acc: 0.6126 - val_loss: 7.0101 - val_acc: 0.4814

Epoch 00014: val_loss improved from 7.11349 to 7.01005, saving model to saved_models/vgg16_dogbreed.h5
Epoch 15/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.9976 - acc: 0.6142 - val_loss: 7.0621 - val_acc: 0.4671

Epoch 00015: val_loss did not improve from 7.01005
Epoch 16/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.8403 - acc: 0.6222 - val_loss: 6.8041 - val_acc: 0.4814

Epoch 00016: val_loss improved from 7.01005 to 6.80405, saving model to saved_models/vgg16_dogbreed.h5
Epoch 17/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.7099 - acc: 0.6305 - val_loss: 6.7760 - val_acc: 0.4850

Epoch 00017: val_loss improved from 6.80405 to 6.77605, saving model to saved_models/vgg16_dogbreed.h5
Epoch 18/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.6443 - acc: 0.6362 - val_loss: 6.7757 - val_acc: 0.4898

Epoch 00018: val_loss improved from 6.77605 to 6.77575, saving model to saved_models/vgg16_dogbreed.h5
Epoch 19/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.5502 - acc: 0.6428 - val_loss: 6.5639 - val_acc: 0.5030

Epoch 00019: val_loss improved from 6.77575 to 6.56388, saving model to saved_models/vgg16_dogbreed.h5
Epoch 20/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.4940 - acc: 0.6506 - val_loss: 6.5117 - val_acc: 0.5054

Epoch 00020: val_loss improved from 6.56388 to 6.51171, saving model to saved_models/vgg16_dogbreed.h5
Epoch 21/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.3779 - acc: 0.6570 - val_loss: 6.4365 - val_acc: 0.5042

Epoch 00021: val_loss improved from 6.51171 to 6.43655, saving model to saved_models/vgg16_dogbreed.h5
Epoch 22/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.3489 - acc: 0.6641 - val_loss: 6.4643 - val_acc: 0.5102

Epoch 00022: val_loss did not improve from 6.43655
Epoch 23/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.3152 - acc: 0.6644 - val_loss: 6.4125 - val_acc: 0.5138

Epoch 00023: val_loss improved from 6.43655 to 6.41251, saving model to saved_models/vgg16_dogbreed.h5
Epoch 24/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.2896 - acc: 0.6680 - val_loss: 6.3933 - val_acc: 0.5066

Epoch 00024: val_loss improved from 6.41251 to 6.39332, saving model to saved_models/vgg16_dogbreed.h5
Epoch 25/50
6680/6680 [==============================] - 32s 5ms/step - loss: 5.2202 - acc: 0.6669 - val_loss: 6.5078 - val_acc: 0.4982

Epoch 00025: val_loss did not improve from 6.39332
Epoch 26/50
6680/6680 [==============================] - 31s 5ms/step - loss: 4.9943 - acc: 0.6711 - val_loss: 6.2693 - val_acc: 0.5138

Epoch 00026: val_loss improved from 6.39332 to 6.26932, saving model to saved_models/vgg16_dogbreed.h5
Epoch 27/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.8044 - acc: 0.6846 - val_loss: 6.0290 - val_acc: 0.5234

Epoch 00027: val_loss improved from 6.26932 to 6.02896, saving model to saved_models/vgg16_dogbreed.h5
Epoch 28/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.6901 - acc: 0.6979 - val_loss: 5.8924 - val_acc: 0.5305

Epoch 00028: val_loss improved from 6.02896 to 5.89245, saving model to saved_models/vgg16_dogbreed.h5
Epoch 29/50
6680/6680 [==============================] - 31s 5ms/step - loss: 4.6634 - acc: 0.7040 - val_loss: 5.9515 - val_acc: 0.5377

Epoch 00029: val_loss did not improve from 5.89245
Epoch 30/50
6680/6680 [==============================] - 31s 5ms/step - loss: 4.6518 - acc: 0.7070 - val_loss: 5.9097 - val_acc: 0.5437

Epoch 00030: val_loss did not improve from 5.89245
Epoch 31/50
6680/6680 [==============================] - 31s 5ms/step - loss: 4.6295 - acc: 0.7076 - val_loss: 5.8978 - val_acc: 0.5413

Epoch 00031: val_loss did not improve from 5.89245
Epoch 32/50
6680/6680 [==============================] - 31s 5ms/step - loss: 4.5092 - acc: 0.7045 - val_loss: 5.8295 - val_acc: 0.5401

Epoch 00032: val_loss improved from 5.89245 to 5.82950, saving model to saved_models/vgg16_dogbreed.h5
Epoch 33/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.3933 - acc: 0.7163 - val_loss: 5.7253 - val_acc: 0.5437

Epoch 00033: val_loss improved from 5.82950 to 5.72531, saving model to saved_models/vgg16_dogbreed.h5
Epoch 34/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.2939 - acc: 0.7249 - val_loss: 5.7496 - val_acc: 0.5485

Epoch 00034: val_loss did not improve from 5.72531
Epoch 35/50
6680/6680 [==============================] - 31s 5ms/step - loss: 4.2520 - acc: 0.7272 - val_loss: 5.6237 - val_acc: 0.5677

Epoch 00035: val_loss improved from 5.72531 to 5.62368, saving model to saved_models/vgg16_dogbreed.h5
Epoch 36/50
6680/6680 [==============================] - 31s 5ms/step - loss: 4.2305 - acc: 0.7316 - val_loss: 5.6460 - val_acc: 0.5593

Epoch 00036: val_loss did not improve from 5.62368
Epoch 37/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.2224 - acc: 0.7334 - val_loss: 5.6874 - val_acc: 0.5593

Epoch 00037: val_loss did not improve from 5.62368
Epoch 38/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.2097 - acc: 0.7350 - val_loss: 5.6452 - val_acc: 0.5677

Epoch 00038: val_loss did not improve from 5.62368
Epoch 39/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.2051 - acc: 0.7362 - val_loss: 5.6060 - val_acc: 0.5713

Epoch 00039: val_loss improved from 5.62368 to 5.60601, saving model to saved_models/vgg16_dogbreed.h5
Epoch 40/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.1847 - acc: 0.7361 - val_loss: 5.5871 - val_acc: 0.5569

Epoch 00040: val_loss improved from 5.60601 to 5.58706, saving model to saved_models/vgg16_dogbreed.h5
Epoch 41/50
6680/6680 [==============================] - 32s 5ms/step - loss: 4.1190 - acc: 0.7379 - val_loss: 5.5838 - val_acc: 0.5760

Epoch 00041: val_loss improved from 5.58706 to 5.58375, saving model to saved_models/vgg16_dogbreed.h5
Epoch 42/50
6680/6680 [==============================] - 32s 5ms/step - loss: 3.9954 - acc: 0.7421 - val_loss: 5.4526 - val_acc: 0.5665

Epoch 00042: val_loss improved from 5.58375 to 5.45257, saving model to saved_models/vgg16_dogbreed.h5
Epoch 43/50
6680/6680 [==============================] - 32s 5ms/step - loss: 3.8931 - acc: 0.7521 - val_loss: 5.4203 - val_acc: 0.5749

Epoch 00043: val_loss improved from 5.45257 to 5.42027, saving model to saved_models/vgg16_dogbreed.h5
Epoch 44/50
6680/6680 [==============================] - 32s 5ms/step - loss: 3.8738 - acc: 0.7546 - val_loss: 5.3807 - val_acc: 0.5904

Epoch 00044: val_loss improved from 5.42027 to 5.38066, saving model to saved_models/vgg16_dogbreed.h5
Epoch 45/50
6680/6680 [==============================] - 32s 5ms/step - loss: 3.8616 - acc: 0.7566 - val_loss: 5.4568 - val_acc: 0.5796

Epoch 00045: val_loss did not improve from 5.38066
Epoch 46/50
6680/6680 [==============================] - 32s 5ms/step - loss: 3.8572 - acc: 0.7578 - val_loss: 5.3915 - val_acc: 0.5856

Epoch 00046: val_loss did not improve from 5.38066
Epoch 47/50
6680/6680 [==============================] - 32s 5ms/step - loss: 3.8563 - acc: 0.7591 - val_loss: 5.3632 - val_acc: 0.5844

Epoch 00047: val_loss improved from 5.38066 to 5.36317, saving model to saved_models/vgg16_dogbreed.h5
Epoch 48/50
6680/6680 [==============================] - 32s 5ms/step - loss: 3.8531 - acc: 0.7597 - val_loss: 5.4059 - val_acc: 0.5952

Epoch 00048: val_loss did not improve from 5.36317
Epoch 49/50
6680/6680 [==============================] - 31s 5ms/step - loss: 3.8508 - acc: 0.7605 - val_loss: 5.3315 - val_acc: 0.6012

Epoch 00049: val_loss improved from 5.36317 to 5.33152, saving model to saved_models/vgg16_dogbreed.h5
Epoch 50/50
6680/6680 [==============================] - 32s 5ms/step - loss: 3.8519 - acc: 0.7603 - val_loss: 5.4241 - val_acc: 0.5880

Epoch 00050: val_loss did not improve from 5.33152
Constructing predictor...
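The `custom_fc1` parameter count in the summary above can be sanity-checked by hand: a Dense layer mapping the 512-dimensional global-average-pooled features to 133 breed classes has 512 × 133 weights plus 133 biases, and adding that head to the frozen backbone gives the reported totals. A quick check in plain Python (independent of Keras):

```python
# Verify the Dense-head parameter counts reported in the model summaries.
def dense_params(in_features, out_features):
    """Weights + biases of a fully connected (Dense) layer."""
    return in_features * out_features + out_features

head = dense_params(512, 133)   # custom_fc1 on top of GlobalAveragePooling(512)
print(head)                     # 68229, matching "Trainable params"

# Frozen backbone + trainable head should equal "Total params".
print(14714688 + head)          # 14782917 for VGG16
print(20024384 + head)          # 20092613 for VGG19
```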
In [29]:
predictor = build_finetuned_dogbreed_predictor(
    dog_train, dog_val, VGG19, "vgg19", vgg19_preprocess_input)
predictors[PREDICTOR_TYPE_DOG_BREED][predictor.label] = predictor
Converting dog-breed train images to tensor...
100%|██████████| 6680/6680 [00:34<00:00, 131.91it/s]
Converting dog-breed validation images to tensor...
100%|██████████| 835/835 [00:06<00:00, 129.08it/s]
Normalizing dog-breed train data...
Normalizing dog-breed val data...
Training model...
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         (None, None, None, 3)     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, None, None, 64)    1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, None, None, 64)    36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, None, None, 64)    0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, None, None, 128)   73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, None, None, 128)   147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, None, None, 128)   0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, None, None, 256)   295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_conv4 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, None, None, 256)   0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, None, None, 512)   1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_conv4 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, None, None, 512)   0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv4 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, None, None, 512)   0         
_________________________________________________________________
g_avg_pooling_2d (GlobalAver (None, 512)               0         
_________________________________________________________________
custom_fc1 (Dense)           (None, 133)               68229     
=================================================================
Total params: 20,092,613
Trainable params: 68,229
Non-trainable params: 20,024,384
_________________________________________________________________
Train on 6680 samples, validate on 835 samples
Epoch 1/50
6680/6680 [==============================] - 37s 6ms/step - loss: 11.9363 - acc: 0.1205 - val_loss: 10.2750 - val_acc: 0.2060

Epoch 00001: val_loss improved from inf to 10.27497, saving model to saved_models/vgg19_dogbreed.h5
Epoch 2/50
6680/6680 [==============================] - 36s 5ms/step - loss: 9.2930 - acc: 0.3060 - val_loss: 9.2263 - val_acc: 0.3042

Epoch 00002: val_loss improved from 10.27497 to 9.22627, saving model to saved_models/vgg19_dogbreed.h5
Epoch 3/50
6680/6680 [==============================] - 36s 5ms/step - loss: 8.6842 - acc: 0.3825 - val_loss: 8.9064 - val_acc: 0.3389

Epoch 00003: val_loss improved from 9.22627 to 8.90641, saving model to saved_models/vgg19_dogbreed.h5
Epoch 4/50
6680/6680 [==============================] - 36s 5ms/step - loss: 8.3254 - acc: 0.4263 - val_loss: 8.7772 - val_acc: 0.3689

Epoch 00004: val_loss improved from 8.90641 to 8.77723, saving model to saved_models/vgg19_dogbreed.h5
Epoch 5/50
6680/6680 [==============================] - 37s 6ms/step - loss: 8.1993 - acc: 0.4497 - val_loss: 8.6984 - val_acc: 0.3844

Epoch 00005: val_loss improved from 8.77723 to 8.69838, saving model to saved_models/vgg19_dogbreed.h5
Epoch 6/50
6680/6680 [==============================] - 38s 6ms/step - loss: 8.1222 - acc: 0.4657 - val_loss: 8.5682 - val_acc: 0.3952

Epoch 00006: val_loss improved from 8.69838 to 8.56817, saving model to saved_models/vgg19_dogbreed.h5
Epoch 7/50
6680/6680 [==============================] - 39s 6ms/step - loss: 8.0001 - acc: 0.4792 - val_loss: 8.5694 - val_acc: 0.3844

Epoch 00007: val_loss did not improve from 8.56817
Epoch 8/50
6680/6680 [==============================] - 38s 6ms/step - loss: 7.9022 - acc: 0.4912 - val_loss: 8.4000 - val_acc: 0.3976

Epoch 00008: val_loss improved from 8.56817 to 8.39997, saving model to saved_models/vgg19_dogbreed.h5
Epoch 9/50
6680/6680 [==============================] - 38s 6ms/step - loss: 7.8046 - acc: 0.5012 - val_loss: 8.4019 - val_acc: 0.4036

Epoch 00009: val_loss did not improve from 8.39997
Epoch 10/50
6680/6680 [==============================] - 37s 6ms/step - loss: 7.6273 - acc: 0.5049 - val_loss: 8.0333 - val_acc: 0.4251

Epoch 00010: val_loss improved from 8.39997 to 8.03327, saving model to saved_models/vgg19_dogbreed.h5
Epoch 11/50
6680/6680 [==============================] - 38s 6ms/step - loss: 7.3399 - acc: 0.5287 - val_loss: 7.9044 - val_acc: 0.4180

Epoch 00011: val_loss improved from 8.03327 to 7.90438, saving model to saved_models/vgg19_dogbreed.h5
Epoch 12/50
6680/6680 [==============================] - 37s 5ms/step - loss: 7.2057 - acc: 0.5400 - val_loss: 7.7880 - val_acc: 0.4359

Epoch 00012: val_loss improved from 7.90438 to 7.78796, saving model to saved_models/vgg19_dogbreed.h5
Epoch 13/50
6680/6680 [==============================] - 37s 5ms/step - loss: 7.0274 - acc: 0.5494 - val_loss: 7.6469 - val_acc: 0.4491

Epoch 00013: val_loss improved from 7.78796 to 7.64688, saving model to saved_models/vgg19_dogbreed.h5
Epoch 14/50
6680/6680 [==============================] - 37s 5ms/step - loss: 6.8583 - acc: 0.5600 - val_loss: 7.6114 - val_acc: 0.4479

Epoch 00014: val_loss improved from 7.64688 to 7.61140, saving model to saved_models/vgg19_dogbreed.h5
Epoch 15/50
6680/6680 [==============================] - 37s 5ms/step - loss: 6.7196 - acc: 0.5657 - val_loss: 7.3667 - val_acc: 0.4599

Epoch 00015: val_loss improved from 7.61140 to 7.36665, saving model to saved_models/vgg19_dogbreed.h5
Epoch 16/50
6680/6680 [==============================] - 36s 5ms/step - loss: 6.5429 - acc: 0.5784 - val_loss: 7.2496 - val_acc: 0.4623

Epoch 00016: val_loss improved from 7.36665 to 7.24962, saving model to saved_models/vgg19_dogbreed.h5
Epoch 17/50
6680/6680 [==============================] - 36s 5ms/step - loss: 6.3709 - acc: 0.5895 - val_loss: 7.1270 - val_acc: 0.4695

Epoch 00017: val_loss improved from 7.24962 to 7.12697, saving model to saved_models/vgg19_dogbreed.h5
Epoch 18/50
6680/6680 [==============================] - 37s 5ms/step - loss: 6.2769 - acc: 0.5991 - val_loss: 7.1009 - val_acc: 0.4778

Epoch 00018: val_loss improved from 7.12697 to 7.10088, saving model to saved_models/vgg19_dogbreed.h5
Epoch 19/50
6680/6680 [==============================] - 36s 5ms/step - loss: 5.9973 - acc: 0.6036 - val_loss: 6.6962 - val_acc: 0.4946

Epoch 00019: val_loss improved from 7.10088 to 6.69623, saving model to saved_models/vgg19_dogbreed.h5
Epoch 20/50
6680/6680 [==============================] - 37s 5ms/step - loss: 5.8234 - acc: 0.6259 - val_loss: 6.7076 - val_acc: 0.4970

Epoch 00020: val_loss did not improve from 6.69623
Epoch 21/50
6680/6680 [==============================] - 36s 5ms/step - loss: 5.7944 - acc: 0.6329 - val_loss: 6.5560 - val_acc: 0.5174

Epoch 00021: val_loss improved from 6.69623 to 6.55597, saving model to saved_models/vgg19_dogbreed.h5
Epoch 22/50
6680/6680 [==============================] - 36s 5ms/step - loss: 5.7631 - acc: 0.6343 - val_loss: 6.5717 - val_acc: 0.5066

Epoch 00022: val_loss did not improve from 6.55597
Epoch 23/50
6680/6680 [==============================] - 37s 5ms/step - loss: 5.7068 - acc: 0.6392 - val_loss: 6.5731 - val_acc: 0.5150

Epoch 00023: val_loss did not improve from 6.55597
Epoch 24/50
6680/6680 [==============================] - 36s 5ms/step - loss: 5.6502 - acc: 0.6427 - val_loss: 6.4508 - val_acc: 0.5150

Epoch 00024: val_loss improved from 6.55597 to 6.45079, saving model to saved_models/vgg19_dogbreed.h5
Epoch 25/50
6680/6680 [==============================] - 36s 5ms/step - loss: 5.5065 - acc: 0.6527 - val_loss: 6.3052 - val_acc: 0.5269

Epoch 00025: val_loss improved from 6.45079 to 6.30521, saving model to saved_models/vgg19_dogbreed.h5
Epoch 26/50
6680/6680 [==============================] - 37s 5ms/step - loss: 5.4377 - acc: 0.6560 - val_loss: 6.3198 - val_acc: 0.5353

Epoch 00026: val_loss did not improve from 6.30521
Epoch 27/50
6680/6680 [==============================] - 36s 5ms/step - loss: 5.3359 - acc: 0.6648 - val_loss: 6.2816 - val_acc: 0.5257

Epoch 00027: val_loss improved from 6.30521 to 6.28165, saving model to saved_models/vgg19_dogbreed.h5
Epoch 28/50
6680/6680 [==============================] - 37s 5ms/step - loss: 5.2854 - acc: 0.6662 - val_loss: 6.3249 - val_acc: 0.5126

Epoch 00028: val_loss did not improve from 6.28165
Epoch 29/50
6680/6680 [==============================] - 37s 5ms/step - loss: 5.1817 - acc: 0.6675 - val_loss: 6.1744 - val_acc: 0.5353

Epoch 00029: val_loss improved from 6.28165 to 6.17440, saving model to saved_models/vgg19_dogbreed.h5
Epoch 30/50
6680/6680 [==============================] - 36s 5ms/step - loss: 5.1338 - acc: 0.6725 - val_loss: 6.2134 - val_acc: 0.5329

Epoch 00030: val_loss did not improve from 6.17440
Epoch 31/50
6680/6680 [==============================] - 37s 6ms/step - loss: 5.0514 - acc: 0.6780 - val_loss: 6.0814 - val_acc: 0.5425

Epoch 00031: val_loss improved from 6.17440 to 6.08142, saving model to saved_models/vgg19_dogbreed.h5
Epoch 32/50
6680/6680 [==============================] - 36s 5ms/step - loss: 5.0022 - acc: 0.6829 - val_loss: 6.0622 - val_acc: 0.5461

Epoch 00032: val_loss improved from 6.08142 to 6.06219, saving model to saved_models/vgg19_dogbreed.h5
Epoch 33/50
6680/6680 [==============================] - 37s 5ms/step - loss: 4.9900 - acc: 0.6847 - val_loss: 6.0209 - val_acc: 0.5425

Epoch 00033: val_loss improved from 6.06219 to 6.02092, saving model to saved_models/vgg19_dogbreed.h5
Epoch 34/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.9716 - acc: 0.6873 - val_loss: 6.0831 - val_acc: 0.5401

Epoch 00034: val_loss did not improve from 6.02092
Epoch 35/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.8921 - acc: 0.6901 - val_loss: 6.0525 - val_acc: 0.5389

Epoch 00035: val_loss did not improve from 6.02092
Epoch 36/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.8630 - acc: 0.6928 - val_loss: 5.9382 - val_acc: 0.5521

Epoch 00036: val_loss improved from 6.02092 to 5.93821, saving model to saved_models/vgg19_dogbreed.h5
Epoch 37/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.8490 - acc: 0.6972 - val_loss: 6.0476 - val_acc: 0.5437

Epoch 00037: val_loss did not improve from 5.93821
Epoch 38/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.8501 - acc: 0.6975 - val_loss: 6.0116 - val_acc: 0.5485

Epoch 00038: val_loss did not improve from 5.93821
Epoch 39/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.8322 - acc: 0.6976 - val_loss: 5.9630 - val_acc: 0.5485

Epoch 00039: val_loss did not improve from 5.93821
Epoch 40/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.7457 - acc: 0.6996 - val_loss: 5.8488 - val_acc: 0.5485

Epoch 00040: val_loss improved from 5.93821 to 5.84876, saving model to saved_models/vgg19_dogbreed.h5
Epoch 41/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.6774 - acc: 0.7069 - val_loss: 5.8513 - val_acc: 0.5557

Epoch 00041: val_loss did not improve from 5.84876
Epoch 42/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.6717 - acc: 0.7082 - val_loss: 5.7674 - val_acc: 0.5725

Epoch 00042: val_loss improved from 5.84876 to 5.76737, saving model to saved_models/vgg19_dogbreed.h5
Epoch 43/50
6680/6680 [==============================] - 37s 5ms/step - loss: 4.6688 - acc: 0.7093 - val_loss: 5.8203 - val_acc: 0.5653

Epoch 00043: val_loss did not improve from 5.76737
Epoch 44/50
6680/6680 [==============================] - 37s 5ms/step - loss: 4.6639 - acc: 0.7100 - val_loss: 5.8150 - val_acc: 0.5677

Epoch 00044: val_loss did not improve from 5.76737
Epoch 45/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.6380 - acc: 0.7091 - val_loss: 5.8785 - val_acc: 0.5509

Epoch 00045: val_loss did not improve from 5.76737
Epoch 46/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.5154 - acc: 0.7153 - val_loss: 5.7087 - val_acc: 0.5677

Epoch 00046: val_loss improved from 5.76737 to 5.70873, saving model to saved_models/vgg19_dogbreed.h5
Epoch 47/50
6680/6680 [==============================] - 37s 5ms/step - loss: 4.4518 - acc: 0.7175 - val_loss: 5.6578 - val_acc: 0.5760

Epoch 00047: val_loss improved from 5.70873 to 5.65777, saving model to saved_models/vgg19_dogbreed.h5
Epoch 48/50
6680/6680 [==============================] - 36s 5ms/step - loss: 4.4296 - acc: 0.7211 - val_loss: 5.6383 - val_acc: 0.5641

Epoch 00048: val_loss improved from 5.65777 to 5.63825, saving model to saved_models/vgg19_dogbreed.h5
Epoch 49/50
6680/6680 [==============================] - 38s 6ms/step - loss: 4.3471 - acc: 0.7232 - val_loss: 5.7482 - val_acc: 0.5425

Epoch 00049: val_loss did not improve from 5.63825
Epoch 50/50
6680/6680 [==============================] - 38s 6ms/step - loss: 4.1216 - acc: 0.7308 - val_loss: 5.4657 - val_acc: 0.5653

Epoch 00050: val_loss improved from 5.63825 to 5.46565, saving model to saved_models/vgg19_dogbreed.h5
Constructing predictor...
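The `val_loss improved from X to Y, saving model to …` lines in the logs above come from Keras's `ModelCheckpoint` callback with `save_best_only=True` monitoring `val_loss`: weights are written to disk only when the monitored metric beats its best value so far. The decision rule is simple enough to sketch in plain Python (an illustration of the logged behavior, not the Keras source; `save_fn` is a hypothetical stand-in for writing the .h5 file):

```python
import math

def checkpoint_messages(val_losses, filepath, save_fn=lambda path: None):
    """Mimic ModelCheckpoint(save_best_only=True, monitor='val_loss')."""
    best = math.inf
    messages = []
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            messages.append(
                f"Epoch {epoch:05d}: val_loss improved from {best:.5f} "
                f"to {loss:.5f}, saving model to {filepath}")
            best = loss
            save_fn(filepath)  # persist weights only on improvement
        else:
            messages.append(
                f"Epoch {epoch:05d}: val_loss did not improve from {best:.5f}")
    return messages

# Replaying the first epochs of the VGG19 run above:
msgs = checkpoint_messages([10.27497, 9.22627, 8.90641],
                           "saved_models/vgg19_dogbreed.h5")
```

This is why the best model on disk at the end of training corresponds to the lowest validation loss seen (5.46565 for VGG19), not necessarily to the final epoch.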
In [30]:
predictor = build_finetuned_dogbreed_predictor(
    dog_train, dog_val, InceptionV3, "inceptionV3", inceptionv3_preprocess_input)
predictors[PREDICTOR_TYPE_DOG_BREED][predictor.label] = predictor
Converting dog-breed train images to tensor...
100%|██████████| 6680/6680 [00:34<00:00, 132.22it/s]
Converting dog-breed validation images to tensor...
100%|██████████| 835/835 [00:03<00:00, 218.10it/s]
Normalizing dog-breed train data...
Normalizing dog-breed val data...
Training model...
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_4 (InputLayer)            (None, None, None, 3 0                                            
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, None, None, 3 864         input_4[0][0]                    
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, None, 3 96          conv2d_9[0][0]                   
__________________________________________________________________________________________________
activation_50 (Activation)      (None, None, None, 3 0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, None, None, 3 9216        activation_50[0][0]              
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, None, 3 96          conv2d_10[0][0]                  
__________________________________________________________________________________________________
activation_51 (Activation)      (None, None, None, 3 0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, None, None, 6 18432       activation_51[0][0]              
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, None, None, 6 192         conv2d_11[0][0]                  
__________________________________________________________________________________________________
activation_52 (Activation)      (None, None, None, 6 0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
max_pooling2d_6 (MaxPooling2D)  (None, None, None, 6 0           activation_52[0][0]              
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, None, None, 8 5120        max_pooling2d_6[0][0]            
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, None, None, 8 240         conv2d_12[0][0]                  
__________________________________________________________________________________________________
activation_53 (Activation)      (None, None, None, 8 0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, None, None, 1 138240      activation_53[0][0]              
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, None, None, 1 576         conv2d_13[0][0]                  
__________________________________________________________________________________________________
activation_54 (Activation)      (None, None, None, 1 0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
max_pooling2d_7 (MaxPooling2D)  (None, None, None, 1 0           activation_54[0][0]              
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, None, None, 6 12288       max_pooling2d_7[0][0]            
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, None, None, 6 192         conv2d_17[0][0]                  
__________________________________________________________________________________________________
activation_58 (Activation)      (None, None, None, 6 0           batch_normalization_9[0][0]      
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, None, None, 4 9216        max_pooling2d_7[0][0]            
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, None, None, 9 55296       activation_58[0][0]              
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, None, None, 4 144         conv2d_15[0][0]                  
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, None, None, 9 288         conv2d_18[0][0]                  
__________________________________________________________________________________________________
activation_56 (Activation)      (None, None, None, 4 0           batch_normalization_7[0][0]      
__________________________________________________________________________________________________
activation_59 (Activation)      (None, None, None, 9 0           batch_normalization_10[0][0]     
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, None, None, 1 0           max_pooling2d_7[0][0]            
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, None, None, 6 12288       max_pooling2d_7[0][0]            
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, None, None, 6 76800       activation_56[0][0]              
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, None, None, 9 82944       activation_59[0][0]              
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, None, None, 3 6144        average_pooling2d_1[0][0]        
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, None, None, 6 192         conv2d_14[0][0]                  
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, None, None, 6 192         conv2d_16[0][0]                  
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, None, None, 9 288         conv2d_19[0][0]                  
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, None, None, 3 96          conv2d_20[0][0]                  
__________________________________________________________________________________________________
activation_55 (Activation)      (None, None, None, 6 0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
activation_57 (Activation)      (None, None, None, 6 0           batch_normalization_8[0][0]      
__________________________________________________________________________________________________
activation_60 (Activation)      (None, None, None, 9 0           batch_normalization_11[0][0]     
__________________________________________________________________________________________________
activation_61 (Activation)      (None, None, None, 3 0           batch_normalization_12[0][0]     
__________________________________________________________________________________________________
mixed0 (Concatenate)            (None, None, None, 2 0           activation_55[0][0]              
                                                                 activation_57[0][0]              
                                                                 activation_60[0][0]              
                                                                 activation_61[0][0]              
__________________________________________________________________________________________________
conv2d_24 (Conv2D)              (None, None, None, 6 16384       mixed0[0][0]                     
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, None, None, 6 192         conv2d_24[0][0]                  
__________________________________________________________________________________________________
activation_65 (Activation)      (None, None, None, 6 0           batch_normalization_16[0][0]     
__________________________________________________________________________________________________
conv2d_22 (Conv2D)              (None, None, None, 4 12288       mixed0[0][0]                     
__________________________________________________________________________________________________
conv2d_25 (Conv2D)              (None, None, None, 9 55296       activation_65[0][0]              
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, None, None, 4 144         conv2d_22[0][0]                  
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, None, None, 9 288         conv2d_25[0][0]                  
__________________________________________________________________________________________________
activation_63 (Activation)      (None, None, None, 4 0           batch_normalization_14[0][0]     
__________________________________________________________________________________________________
activation_66 (Activation)      (None, None, None, 9 0           batch_normalization_17[0][0]     
__________________________________________________________________________________________________
average_pooling2d_2 (AveragePoo (None, None, None, 2 0           mixed0[0][0]                     
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, None, None, 6 16384       mixed0[0][0]                     
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (None, None, None, 6 76800       activation_63[0][0]              
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, None, None, 9 82944       activation_66[0][0]              
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (None, None, None, 6 16384       average_pooling2d_2[0][0]        
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, None, None, 6 192         conv2d_21[0][0]                  
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, None, None, 6 192         conv2d_23[0][0]                  
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, None, None, 9 288         conv2d_26[0][0]                  
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, None, None, 6 192         conv2d_27[0][0]                  
__________________________________________________________________________________________________
activation_62 (Activation)      (None, None, None, 6 0           batch_normalization_13[0][0]     
__________________________________________________________________________________________________
activation_64 (Activation)      (None, None, None, 6 0           batch_normalization_15[0][0]     
__________________________________________________________________________________________________
activation_67 (Activation)      (None, None, None, 9 0           batch_normalization_18[0][0]     
__________________________________________________________________________________________________
activation_68 (Activation)      (None, None, None, 6 0           batch_normalization_19[0][0]     
__________________________________________________________________________________________________
mixed1 (Concatenate)            (None, None, None, 2 0           activation_62[0][0]              
                                                                 activation_64[0][0]              
                                                                 activation_67[0][0]              
                                                                 activation_68[0][0]              
__________________________________________________________________________________________________
conv2d_31 (Conv2D)              (None, None, None, 6 18432       mixed1[0][0]                     
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, None, None, 6 192         conv2d_31[0][0]                  
__________________________________________________________________________________________________
activation_72 (Activation)      (None, None, None, 6 0           batch_normalization_23[0][0]     
__________________________________________________________________________________________________
conv2d_29 (Conv2D)              (None, None, None, 4 13824       mixed1[0][0]                     
__________________________________________________________________________________________________
conv2d_32 (Conv2D)              (None, None, None, 9 55296       activation_72[0][0]              
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, None, None, 4 144         conv2d_29[0][0]                  
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, None, None, 9 288         conv2d_32[0][0]                  
__________________________________________________________________________________________________
activation_70 (Activation)      (None, None, None, 4 0           batch_normalization_21[0][0]     
__________________________________________________________________________________________________
activation_73 (Activation)      (None, None, None, 9 0           batch_normalization_24[0][0]     
__________________________________________________________________________________________________
average_pooling2d_3 (AveragePoo (None, None, None, 2 0           mixed1[0][0]                     
__________________________________________________________________________________________________
conv2d_28 (Conv2D)              (None, None, None, 6 18432       mixed1[0][0]                     
__________________________________________________________________________________________________
conv2d_30 (Conv2D)              (None, None, None, 6 76800       activation_70[0][0]              
__________________________________________________________________________________________________
conv2d_33 (Conv2D)              (None, None, None, 9 82944       activation_73[0][0]              
__________________________________________________________________________________________________
conv2d_34 (Conv2D)              (None, None, None, 6 18432       average_pooling2d_3[0][0]        
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, None, None, 6 192         conv2d_28[0][0]                  
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, None, None, 6 192         conv2d_30[0][0]                  
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, None, None, 9 288         conv2d_33[0][0]                  
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, None, None, 6 192         conv2d_34[0][0]                  
__________________________________________________________________________________________________
activation_69 (Activation)      (None, None, None, 6 0           batch_normalization_20[0][0]     
__________________________________________________________________________________________________
activation_71 (Activation)      (None, None, None, 6 0           batch_normalization_22[0][0]     
__________________________________________________________________________________________________
activation_74 (Activation)      (None, None, None, 9 0           batch_normalization_25[0][0]     
__________________________________________________________________________________________________
activation_75 (Activation)      (None, None, None, 6 0           batch_normalization_26[0][0]     
__________________________________________________________________________________________________
mixed2 (Concatenate)            (None, None, None, 2 0           activation_69[0][0]              
                                                                 activation_71[0][0]              
                                                                 activation_74[0][0]              
                                                                 activation_75[0][0]              
__________________________________________________________________________________________________
conv2d_36 (Conv2D)              (None, None, None, 6 18432       mixed2[0][0]                     
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, None, None, 6 192         conv2d_36[0][0]                  
__________________________________________________________________________________________________
activation_77 (Activation)      (None, None, None, 6 0           batch_normalization_28[0][0]     
__________________________________________________________________________________________________
conv2d_37 (Conv2D)              (None, None, None, 9 55296       activation_77[0][0]              
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, None, None, 9 288         conv2d_37[0][0]                  
__________________________________________________________________________________________________
activation_78 (Activation)      (None, None, None, 9 0           batch_normalization_29[0][0]     
__________________________________________________________________________________________________
conv2d_35 (Conv2D)              (None, None, None, 3 995328      mixed2[0][0]                     
__________________________________________________________________________________________________
conv2d_38 (Conv2D)              (None, None, None, 9 82944       activation_78[0][0]              
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, None, None, 3 1152        conv2d_35[0][0]                  
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, None, None, 9 288         conv2d_38[0][0]                  
__________________________________________________________________________________________________
activation_76 (Activation)      (None, None, None, 3 0           batch_normalization_27[0][0]     
__________________________________________________________________________________________________
activation_79 (Activation)      (None, None, None, 9 0           batch_normalization_30[0][0]     
__________________________________________________________________________________________________
max_pooling2d_8 (MaxPooling2D)  (None, None, None, 2 0           mixed2[0][0]                     
__________________________________________________________________________________________________
mixed3 (Concatenate)            (None, None, None, 7 0           activation_76[0][0]              
                                                                 activation_79[0][0]              
                                                                 max_pooling2d_8[0][0]            
__________________________________________________________________________________________________
conv2d_43 (Conv2D)              (None, None, None, 1 98304       mixed3[0][0]                     
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, None, None, 1 384         conv2d_43[0][0]                  
__________________________________________________________________________________________________
activation_84 (Activation)      (None, None, None, 1 0           batch_normalization_35[0][0]     
__________________________________________________________________________________________________
conv2d_44 (Conv2D)              (None, None, None, 1 114688      activation_84[0][0]              
__________________________________________________________________________________________________
batch_normalization_36 (BatchNo (None, None, None, 1 384         conv2d_44[0][0]                  
__________________________________________________________________________________________________
activation_85 (Activation)      (None, None, None, 1 0           batch_normalization_36[0][0]     
__________________________________________________________________________________________________
conv2d_40 (Conv2D)              (None, None, None, 1 98304       mixed3[0][0]                     
__________________________________________________________________________________________________
conv2d_45 (Conv2D)              (None, None, None, 1 114688      activation_85[0][0]              
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, None, None, 1 384         conv2d_40[0][0]                  
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, None, None, 1 384         conv2d_45[0][0]                  
__________________________________________________________________________________________________
activation_81 (Activation)      (None, None, None, 1 0           batch_normalization_32[0][0]     
__________________________________________________________________________________________________
activation_86 (Activation)      (None, None, None, 1 0           batch_normalization_37[0][0]     
__________________________________________________________________________________________________
conv2d_41 (Conv2D)              (None, None, None, 1 114688      activation_81[0][0]              
__________________________________________________________________________________________________
conv2d_46 (Conv2D)              (None, None, None, 1 114688      activation_86[0][0]              
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, None, None, 1 384         conv2d_41[0][0]                  
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, None, None, 1 384         conv2d_46[0][0]                  
__________________________________________________________________________________________________
activation_82 (Activation)      (None, None, None, 1 0           batch_normalization_33[0][0]     
__________________________________________________________________________________________________
activation_87 (Activation)      (None, None, None, 1 0           batch_normalization_38[0][0]     
__________________________________________________________________________________________________
average_pooling2d_4 (AveragePoo (None, None, None, 7 0           mixed3[0][0]                     
__________________________________________________________________________________________________
conv2d_39 (Conv2D)              (None, None, None, 1 147456      mixed3[0][0]                     
__________________________________________________________________________________________________
conv2d_42 (Conv2D)              (None, None, None, 1 172032      activation_82[0][0]              
__________________________________________________________________________________________________
conv2d_47 (Conv2D)              (None, None, None, 1 172032      activation_87[0][0]              
__________________________________________________________________________________________________
conv2d_48 (Conv2D)              (None, None, None, 1 147456      average_pooling2d_4[0][0]        
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, None, None, 1 576         conv2d_39[0][0]                  
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, None, None, 1 576         conv2d_42[0][0]                  
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, None, None, 1 576         conv2d_47[0][0]                  
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, None, None, 1 576         conv2d_48[0][0]                  
__________________________________________________________________________________________________
activation_80 (Activation)      (None, None, None, 1 0           batch_normalization_31[0][0]     
__________________________________________________________________________________________________
activation_83 (Activation)      (None, None, None, 1 0           batch_normalization_34[0][0]     
__________________________________________________________________________________________________
activation_88 (Activation)      (None, None, None, 1 0           batch_normalization_39[0][0]     
__________________________________________________________________________________________________
activation_89 (Activation)      (None, None, None, 1 0           batch_normalization_40[0][0]     
__________________________________________________________________________________________________
mixed4 (Concatenate)            (None, None, None, 7 0           activation_80[0][0]              
                                                                 activation_83[0][0]              
                                                                 activation_88[0][0]              
                                                                 activation_89[0][0]              
__________________________________________________________________________________________________
conv2d_53 (Conv2D)              (None, None, None, 1 122880      mixed4[0][0]                     
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, None, None, 1 480         conv2d_53[0][0]                  
__________________________________________________________________________________________________
activation_94 (Activation)      (None, None, None, 1 0           batch_normalization_45[0][0]     
__________________________________________________________________________________________________
conv2d_54 (Conv2D)              (None, None, None, 1 179200      activation_94[0][0]              
__________________________________________________________________________________________________
batch_normalization_46 (BatchNo (None, None, None, 1 480         conv2d_54[0][0]                  
__________________________________________________________________________________________________
activation_95 (Activation)      (None, None, None, 1 0           batch_normalization_46[0][0]     
__________________________________________________________________________________________________
conv2d_50 (Conv2D)              (None, None, None, 1 122880      mixed4[0][0]                     
__________________________________________________________________________________________________
conv2d_55 (Conv2D)              (None, None, None, 1 179200      activation_95[0][0]              
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, None, None, 1 480         conv2d_50[0][0]                  
__________________________________________________________________________________________________
batch_normalization_47 (BatchNo (None, None, None, 1 480         conv2d_55[0][0]                  
__________________________________________________________________________________________________
activation_91 (Activation)      (None, None, None, 1 0           batch_normalization_42[0][0]     
__________________________________________________________________________________________________
activation_96 (Activation)      (None, None, None, 1 0           batch_normalization_47[0][0]     
__________________________________________________________________________________________________
conv2d_51 (Conv2D)              (None, None, None, 1 179200      activation_91[0][0]              
__________________________________________________________________________________________________
conv2d_56 (Conv2D)              (None, None, None, 1 179200      activation_96[0][0]              
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, None, None, 1 480         conv2d_51[0][0]                  
__________________________________________________________________________________________________
batch_normalization_48 (BatchNo (None, None, None, 1 480         conv2d_56[0][0]                  
__________________________________________________________________________________________________
activation_92 (Activation)      (None, None, None, 1 0           batch_normalization_43[0][0]     
__________________________________________________________________________________________________
activation_97 (Activation)      (None, None, None, 1 0           batch_normalization_48[0][0]     
__________________________________________________________________________________________________
average_pooling2d_5 (AveragePoo (None, None, None, 7 0           mixed4[0][0]                     
__________________________________________________________________________________________________
conv2d_49 (Conv2D)              (None, None, None, 1 147456      mixed4[0][0]                     
__________________________________________________________________________________________________
conv2d_52 (Conv2D)              (None, None, None, 1 215040      activation_92[0][0]              
__________________________________________________________________________________________________
conv2d_57 (Conv2D)              (None, None, None, 1 215040      activation_97[0][0]              
__________________________________________________________________________________________________
conv2d_58 (Conv2D)              (None, None, None, 1 147456      average_pooling2d_5[0][0]        
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, None, None, 1 576         conv2d_49[0][0]                  
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, None, None, 1 576         conv2d_52[0][0]                  
__________________________________________________________________________________________________
batch_normalization_49 (BatchNo (None, None, None, 1 576         conv2d_57[0][0]                  
__________________________________________________________________________________________________
batch_normalization_50 (BatchNo (None, None, None, 1 576         conv2d_58[0][0]                  
__________________________________________________________________________________________________
activation_90 (Activation)      (None, None, None, 1 0           batch_normalization_41[0][0]     
__________________________________________________________________________________________________
activation_93 (Activation)      (None, None, None, 1 0           batch_normalization_44[0][0]     
__________________________________________________________________________________________________
activation_98 (Activation)      (None, None, None, 1 0           batch_normalization_49[0][0]     
__________________________________________________________________________________________________
activation_99 (Activation)      (None, None, None, 1 0           batch_normalization_50[0][0]     
__________________________________________________________________________________________________
mixed5 (Concatenate)            (None, None, None, 7 0           activation_90[0][0]              
                                                                 activation_93[0][0]              
                                                                 activation_98[0][0]              
                                                                 activation_99[0][0]              
__________________________________________________________________________________________________
conv2d_63 (Conv2D)              (None, None, None, 1 122880      mixed5[0][0]                     
__________________________________________________________________________________________________
batch_normalization_55 (BatchNo (None, None, None, 1 480         conv2d_63[0][0]                  
__________________________________________________________________________________________________
activation_104 (Activation)     (None, None, None, 1 0           batch_normalization_55[0][0]     
__________________________________________________________________________________________________
conv2d_64 (Conv2D)              (None, None, None, 1 179200      activation_104[0][0]             
__________________________________________________________________________________________________
batch_normalization_56 (BatchNo (None, None, None, 1 480         conv2d_64[0][0]                  
__________________________________________________________________________________________________
activation_105 (Activation)     (None, None, None, 1 0           batch_normalization_56[0][0]     
__________________________________________________________________________________________________
conv2d_60 (Conv2D)              (None, None, None, 1 122880      mixed5[0][0]                     
__________________________________________________________________________________________________
conv2d_65 (Conv2D)              (None, None, None, 1 179200      activation_105[0][0]             
__________________________________________________________________________________________________
batch_normalization_52 (BatchNo (None, None, None, 1 480         conv2d_60[0][0]                  
__________________________________________________________________________________________________
batch_normalization_57 (BatchNo (None, None, None, 1 480         conv2d_65[0][0]                  
__________________________________________________________________________________________________
activation_101 (Activation)     (None, None, None, 1 0           batch_normalization_52[0][0]     
__________________________________________________________________________________________________
activation_106 (Activation)     (None, None, None, 1 0           batch_normalization_57[0][0]     
__________________________________________________________________________________________________
conv2d_61 (Conv2D)              (None, None, None, 1 179200      activation_101[0][0]             
__________________________________________________________________________________________________
conv2d_66 (Conv2D)              (None, None, None, 1 179200      activation_106[0][0]             
__________________________________________________________________________________________________
batch_normalization_53 (BatchNo (None, None, None, 1 480         conv2d_61[0][0]                  
__________________________________________________________________________________________________
batch_normalization_58 (BatchNo (None, None, None, 1 480         conv2d_66[0][0]                  
__________________________________________________________________________________________________
activation_102 (Activation)     (None, None, None, 1 0           batch_normalization_53[0][0]     
__________________________________________________________________________________________________
activation_107 (Activation)     (None, None, None, 1 0           batch_normalization_58[0][0]     
__________________________________________________________________________________________________
average_pooling2d_6 (AveragePoo (None, None, None, 7 0           mixed5[0][0]                     
__________________________________________________________________________________________________
conv2d_59 (Conv2D)              (None, None, None, 1 147456      mixed5[0][0]                     
__________________________________________________________________________________________________
conv2d_62 (Conv2D)              (None, None, None, 1 215040      activation_102[0][0]             
__________________________________________________________________________________________________
conv2d_67 (Conv2D)              (None, None, None, 1 215040      activation_107[0][0]             
__________________________________________________________________________________________________
conv2d_68 (Conv2D)              (None, None, None, 1 147456      average_pooling2d_6[0][0]        
__________________________________________________________________________________________________
batch_normalization_51 (BatchNo (None, None, None, 1 576         conv2d_59[0][0]                  
__________________________________________________________________________________________________
batch_normalization_54 (BatchNo (None, None, None, 1 576         conv2d_62[0][0]                  
__________________________________________________________________________________________________
batch_normalization_59 (BatchNo (None, None, None, 1 576         conv2d_67[0][0]                  
__________________________________________________________________________________________________
batch_normalization_60 (BatchNo (None, None, None, 1 576         conv2d_68[0][0]                  
__________________________________________________________________________________________________
activation_100 (Activation)     (None, None, None, 1 0           batch_normalization_51[0][0]     
__________________________________________________________________________________________________
activation_103 (Activation)     (None, None, None, 1 0           batch_normalization_54[0][0]     
__________________________________________________________________________________________________
activation_108 (Activation)     (None, None, None, 1 0           batch_normalization_59[0][0]     
__________________________________________________________________________________________________
activation_109 (Activation)     (None, None, None, 1 0           batch_normalization_60[0][0]     
__________________________________________________________________________________________________
mixed6 (Concatenate)            (None, None, None, 7 0           activation_100[0][0]             
                                                                 activation_103[0][0]             
                                                                 activation_108[0][0]             
                                                                 activation_109[0][0]             
__________________________________________________________________________________________________
conv2d_73 (Conv2D)              (None, None, None, 1 147456      mixed6[0][0]                     
__________________________________________________________________________________________________
batch_normalization_65 (BatchNo (None, None, None, 1 576         conv2d_73[0][0]                  
__________________________________________________________________________________________________
activation_114 (Activation)     (None, None, None, 1 0           batch_normalization_65[0][0]     
__________________________________________________________________________________________________
conv2d_74 (Conv2D)              (None, None, None, 1 258048      activation_114[0][0]             
__________________________________________________________________________________________________
batch_normalization_66 (BatchNo (None, None, None, 1 576         conv2d_74[0][0]                  
__________________________________________________________________________________________________
activation_115 (Activation)     (None, None, None, 1 0           batch_normalization_66[0][0]     
__________________________________________________________________________________________________
conv2d_70 (Conv2D)              (None, None, None, 1 147456      mixed6[0][0]                     
__________________________________________________________________________________________________
conv2d_75 (Conv2D)              (None, None, None, 1 258048      activation_115[0][0]             
__________________________________________________________________________________________________
batch_normalization_62 (BatchNo (None, None, None, 1 576         conv2d_70[0][0]                  
__________________________________________________________________________________________________
batch_normalization_67 (BatchNo (None, None, None, 1 576         conv2d_75[0][0]                  
__________________________________________________________________________________________________
activation_111 (Activation)     (None, None, None, 1 0           batch_normalization_62[0][0]     
__________________________________________________________________________________________________
activation_116 (Activation)     (None, None, None, 1 0           batch_normalization_67[0][0]     
__________________________________________________________________________________________________
conv2d_71 (Conv2D)              (None, None, None, 1 258048      activation_111[0][0]             
__________________________________________________________________________________________________
conv2d_76 (Conv2D)              (None, None, None, 1 258048      activation_116[0][0]             
__________________________________________________________________________________________________
batch_normalization_63 (BatchNo (None, None, None, 1 576         conv2d_71[0][0]                  
__________________________________________________________________________________________________
batch_normalization_68 (BatchNo (None, None, None, 1 576         conv2d_76[0][0]                  
__________________________________________________________________________________________________
activation_112 (Activation)     (None, None, None, 1 0           batch_normalization_63[0][0]     
__________________________________________________________________________________________________
activation_117 (Activation)     (None, None, None, 1 0           batch_normalization_68[0][0]     
__________________________________________________________________________________________________
average_pooling2d_7 (AveragePoo (None, None, None, 7 0           mixed6[0][0]                     
__________________________________________________________________________________________________
conv2d_69 (Conv2D)              (None, None, None, 1 147456      mixed6[0][0]                     
__________________________________________________________________________________________________
conv2d_72 (Conv2D)              (None, None, None, 1 258048      activation_112[0][0]             
__________________________________________________________________________________________________
conv2d_77 (Conv2D)              (None, None, None, 1 258048      activation_117[0][0]             
__________________________________________________________________________________________________
conv2d_78 (Conv2D)              (None, None, None, 1 147456      average_pooling2d_7[0][0]        
__________________________________________________________________________________________________
batch_normalization_61 (BatchNo (None, None, None, 1 576         conv2d_69[0][0]                  
__________________________________________________________________________________________________
batch_normalization_64 (BatchNo (None, None, None, 1 576         conv2d_72[0][0]                  
__________________________________________________________________________________________________
batch_normalization_69 (BatchNo (None, None, None, 1 576         conv2d_77[0][0]                  
__________________________________________________________________________________________________
batch_normalization_70 (BatchNo (None, None, None, 1 576         conv2d_78[0][0]                  
__________________________________________________________________________________________________
activation_110 (Activation)     (None, None, None, 1 0           batch_normalization_61[0][0]     
__________________________________________________________________________________________________
activation_113 (Activation)     (None, None, None, 1 0           batch_normalization_64[0][0]     
__________________________________________________________________________________________________
activation_118 (Activation)     (None, None, None, 1 0           batch_normalization_69[0][0]     
__________________________________________________________________________________________________
activation_119 (Activation)     (None, None, None, 1 0           batch_normalization_70[0][0]     
__________________________________________________________________________________________________
mixed7 (Concatenate)            (None, None, None, 7 0           activation_110[0][0]             
                                                                 activation_113[0][0]             
                                                                 activation_118[0][0]             
                                                                 activation_119[0][0]             
__________________________________________________________________________________________________
conv2d_81 (Conv2D)              (None, None, None, 1 147456      mixed7[0][0]                     
__________________________________________________________________________________________________
batch_normalization_73 (BatchNo (None, None, None, 1 576         conv2d_81[0][0]                  
__________________________________________________________________________________________________
activation_122 (Activation)     (None, None, None, 1 0           batch_normalization_73[0][0]     
__________________________________________________________________________________________________
conv2d_82 (Conv2D)              (None, None, None, 1 258048      activation_122[0][0]             
__________________________________________________________________________________________________
batch_normalization_74 (BatchNo (None, None, None, 1 576         conv2d_82[0][0]                  
__________________________________________________________________________________________________
activation_123 (Activation)     (None, None, None, 1 0           batch_normalization_74[0][0]     
__________________________________________________________________________________________________
conv2d_79 (Conv2D)              (None, None, None, 1 147456      mixed7[0][0]                     
__________________________________________________________________________________________________
conv2d_83 (Conv2D)              (None, None, None, 1 258048      activation_123[0][0]             
__________________________________________________________________________________________________
batch_normalization_71 (BatchNo (None, None, None, 1 576         conv2d_79[0][0]                  
__________________________________________________________________________________________________
batch_normalization_75 (BatchNo (None, None, None, 1 576         conv2d_83[0][0]                  
__________________________________________________________________________________________________
activation_120 (Activation)     (None, None, None, 1 0           batch_normalization_71[0][0]     
__________________________________________________________________________________________________
activation_124 (Activation)     (None, None, None, 1 0           batch_normalization_75[0][0]     
__________________________________________________________________________________________________
conv2d_80 (Conv2D)              (None, None, None, 3 552960      activation_120[0][0]             
__________________________________________________________________________________________________
conv2d_84 (Conv2D)              (None, None, None, 1 331776      activation_124[0][0]             
__________________________________________________________________________________________________
batch_normalization_72 (BatchNo (None, None, None, 3 960         conv2d_80[0][0]                  
__________________________________________________________________________________________________
batch_normalization_76 (BatchNo (None, None, None, 1 576         conv2d_84[0][0]                  
__________________________________________________________________________________________________
activation_121 (Activation)     (None, None, None, 3 0           batch_normalization_72[0][0]     
__________________________________________________________________________________________________
activation_125 (Activation)     (None, None, None, 1 0           batch_normalization_76[0][0]     
__________________________________________________________________________________________________
max_pooling2d_9 (MaxPooling2D)  (None, None, None, 7 0           mixed7[0][0]                     
__________________________________________________________________________________________________
mixed8 (Concatenate)            (None, None, None, 1 0           activation_121[0][0]             
                                                                 activation_125[0][0]             
                                                                 max_pooling2d_9[0][0]            
__________________________________________________________________________________________________
conv2d_89 (Conv2D)              (None, None, None, 4 573440      mixed8[0][0]                     
__________________________________________________________________________________________________
batch_normalization_81 (BatchNo (None, None, None, 4 1344        conv2d_89[0][0]                  
__________________________________________________________________________________________________
activation_130 (Activation)     (None, None, None, 4 0           batch_normalization_81[0][0]     
__________________________________________________________________________________________________
conv2d_86 (Conv2D)              (None, None, None, 3 491520      mixed8[0][0]                     
__________________________________________________________________________________________________
conv2d_90 (Conv2D)              (None, None, None, 3 1548288     activation_130[0][0]             
__________________________________________________________________________________________________
batch_normalization_78 (BatchNo (None, None, None, 3 1152        conv2d_86[0][0]                  
__________________________________________________________________________________________________
batch_normalization_82 (BatchNo (None, None, None, 3 1152        conv2d_90[0][0]                  
__________________________________________________________________________________________________
activation_127 (Activation)     (None, None, None, 3 0           batch_normalization_78[0][0]     
__________________________________________________________________________________________________
activation_131 (Activation)     (None, None, None, 3 0           batch_normalization_82[0][0]     
__________________________________________________________________________________________________
conv2d_87 (Conv2D)              (None, None, None, 3 442368      activation_127[0][0]             
__________________________________________________________________________________________________
conv2d_88 (Conv2D)              (None, None, None, 3 442368      activation_127[0][0]             
__________________________________________________________________________________________________
conv2d_91 (Conv2D)              (None, None, None, 3 442368      activation_131[0][0]             
__________________________________________________________________________________________________
conv2d_92 (Conv2D)              (None, None, None, 3 442368      activation_131[0][0]             
__________________________________________________________________________________________________
average_pooling2d_8 (AveragePoo (None, None, None, 1 0           mixed8[0][0]                     
__________________________________________________________________________________________________
conv2d_85 (Conv2D)              (None, None, None, 3 409600      mixed8[0][0]                     
__________________________________________________________________________________________________
batch_normalization_79 (BatchNo (None, None, None, 3 1152        conv2d_87[0][0]                  
__________________________________________________________________________________________________
batch_normalization_80 (BatchNo (None, None, None, 3 1152        conv2d_88[0][0]                  
__________________________________________________________________________________________________
batch_normalization_83 (BatchNo (None, None, None, 3 1152        conv2d_91[0][0]                  
__________________________________________________________________________________________________
batch_normalization_84 (BatchNo (None, None, None, 3 1152        conv2d_92[0][0]                  
__________________________________________________________________________________________________
conv2d_93 (Conv2D)              (None, None, None, 1 245760      average_pooling2d_8[0][0]        
__________________________________________________________________________________________________
batch_normalization_77 (BatchNo (None, None, None, 3 960         conv2d_85[0][0]                  
__________________________________________________________________________________________________
activation_128 (Activation)     (None, None, None, 3 0           batch_normalization_79[0][0]     
__________________________________________________________________________________________________
activation_129 (Activation)     (None, None, None, 3 0           batch_normalization_80[0][0]     
__________________________________________________________________________________________________
activation_132 (Activation)     (None, None, None, 3 0           batch_normalization_83[0][0]     
__________________________________________________________________________________________________
activation_133 (Activation)     (None, None, None, 3 0           batch_normalization_84[0][0]     
__________________________________________________________________________________________________
batch_normalization_85 (BatchNo (None, None, None, 1 576         conv2d_93[0][0]                  
__________________________________________________________________________________________________
activation_126 (Activation)     (None, None, None, 3 0           batch_normalization_77[0][0]     
__________________________________________________________________________________________________
mixed9_0 (Concatenate)          (None, None, None, 7 0           activation_128[0][0]             
                                                                 activation_129[0][0]             
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, None, None, 7 0           activation_132[0][0]             
                                                                 activation_133[0][0]             
__________________________________________________________________________________________________
activation_134 (Activation)     (None, None, None, 1 0           batch_normalization_85[0][0]     
__________________________________________________________________________________________________
mixed9 (Concatenate)            (None, None, None, 2 0           activation_126[0][0]             
                                                                 mixed9_0[0][0]                   
                                                                 concatenate_1[0][0]              
                                                                 activation_134[0][0]             
__________________________________________________________________________________________________
conv2d_98 (Conv2D)              (None, None, None, 4 917504      mixed9[0][0]                     
__________________________________________________________________________________________________
batch_normalization_90 (BatchNo (None, None, None, 4 1344        conv2d_98[0][0]                  
__________________________________________________________________________________________________
activation_139 (Activation)     (None, None, None, 4 0           batch_normalization_90[0][0]     
__________________________________________________________________________________________________
conv2d_95 (Conv2D)              (None, None, None, 3 786432      mixed9[0][0]                     
__________________________________________________________________________________________________
conv2d_99 (Conv2D)              (None, None, None, 3 1548288     activation_139[0][0]             
__________________________________________________________________________________________________
batch_normalization_87 (BatchNo (None, None, None, 3 1152        conv2d_95[0][0]                  
__________________________________________________________________________________________________
batch_normalization_91 (BatchNo (None, None, None, 3 1152        conv2d_99[0][0]                  
__________________________________________________________________________________________________
activation_136 (Activation)     (None, None, None, 3 0           batch_normalization_87[0][0]     
__________________________________________________________________________________________________
activation_140 (Activation)     (None, None, None, 3 0           batch_normalization_91[0][0]     
__________________________________________________________________________________________________
conv2d_96 (Conv2D)              (None, None, None, 3 442368      activation_136[0][0]             
__________________________________________________________________________________________________
conv2d_97 (Conv2D)              (None, None, None, 3 442368      activation_136[0][0]             
__________________________________________________________________________________________________
conv2d_100 (Conv2D)             (None, None, None, 3 442368      activation_140[0][0]             
__________________________________________________________________________________________________
conv2d_101 (Conv2D)             (None, None, None, 3 442368      activation_140[0][0]             
__________________________________________________________________________________________________
average_pooling2d_9 (AveragePoo (None, None, None, 2 0           mixed9[0][0]                     
__________________________________________________________________________________________________
conv2d_94 (Conv2D)              (None, None, None, 3 655360      mixed9[0][0]                     
__________________________________________________________________________________________________
batch_normalization_88 (BatchNo (None, None, None, 3 1152        conv2d_96[0][0]                  
__________________________________________________________________________________________________
batch_normalization_89 (BatchNo (None, None, None, 3 1152        conv2d_97[0][0]                  
__________________________________________________________________________________________________
batch_normalization_92 (BatchNo (None, None, None, 3 1152        conv2d_100[0][0]                 
__________________________________________________________________________________________________
batch_normalization_93 (BatchNo (None, None, None, 3 1152        conv2d_101[0][0]                 
__________________________________________________________________________________________________
conv2d_102 (Conv2D)             (None, None, None, 1 393216      average_pooling2d_9[0][0]        
__________________________________________________________________________________________________
batch_normalization_86 (BatchNo (None, None, None, 3 960         conv2d_94[0][0]                  
__________________________________________________________________________________________________
activation_137 (Activation)     (None, None, None, 3 0           batch_normalization_88[0][0]     
__________________________________________________________________________________________________
activation_138 (Activation)     (None, None, None, 3 0           batch_normalization_89[0][0]     
__________________________________________________________________________________________________
activation_141 (Activation)     (None, None, None, 3 0           batch_normalization_92[0][0]     
__________________________________________________________________________________________________
activation_142 (Activation)     (None, None, None, 3 0           batch_normalization_93[0][0]     
__________________________________________________________________________________________________
batch_normalization_94 (BatchNo (None, None, None, 1 576         conv2d_102[0][0]                 
__________________________________________________________________________________________________
activation_135 (Activation)     (None, None, None, 3 0           batch_normalization_86[0][0]     
__________________________________________________________________________________________________
mixed9_1 (Concatenate)          (None, None, None, 7 0           activation_137[0][0]             
                                                                 activation_138[0][0]             
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, None, None, 7 0           activation_141[0][0]             
                                                                 activation_142[0][0]             
__________________________________________________________________________________________________
activation_143 (Activation)     (None, None, None, 1 0           batch_normalization_94[0][0]     
__________________________________________________________________________________________________
mixed10 (Concatenate)           (None, None, None, 2 0           activation_135[0][0]             
                                                                 mixed9_1[0][0]                   
                                                                 concatenate_2[0][0]              
                                                                 activation_143[0][0]             
__________________________________________________________________________________________________
g_avg_pooling_2d (GlobalAverage (None, 2048)         0           mixed10[0][0]                    
__________________________________________________________________________________________________
custom_fc1 (Dense)              (None, 133)          272517      g_avg_pooling_2d[0][0]           
==================================================================================================
Total params: 22,075,301
Trainable params: 272,517
Non-trainable params: 21,802,784
__________________________________________________________________________________________________
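The summary above reports 272,517 trainable parameters, which corresponds exactly to the custom head sitting on top of the frozen InceptionV3 base: the 2048-wide global-average-pooled features (`g_avg_pooling_2d`) feed a single Dense layer (`custom_fc1`) with 133 breed classes. A quick arithmetic check, using only the numbers from the summary (pure Python, no Keras required):

```python
# Verify the trainable-parameter count reported by model.summary().
# Only the custom head (custom_fc1) is trainable; the InceptionV3 base is frozen.
features = 2048        # output width of g_avg_pooling_2d (GlobalAveragePooling2D)
classes = 133          # number of dog breeds in the Udacity dataset

dense_params = features * classes + classes   # weight matrix + bias vector of custom_fc1
assert dense_params == 272_517                # matches "Trainable params: 272,517"

total_params = 22_075_301
non_trainable = total_params - dense_params
assert non_trainable == 21_802_784            # matches "Non-trainable params: 21,802,784"
print(dense_params, non_trainable)
```

This confirms the transfer-learning setup: all 21.8M InceptionV3 weights stay fixed, so each epoch only updates the ~273k head weights, which is why epochs complete in under 20 seconds.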
Train on 6680 samples, validate on 835 samples
Epoch 1/50
6680/6680 [==============================] - 21s 3ms/step - loss: 2.7427 - acc: 0.4075 - val_loss: 1.0279 - val_acc: 0.7234

Epoch 00001: val_loss improved from inf to 1.02794, saving model to saved_models/inceptionV3_dogbreed.h5
Epoch 2/50
6680/6680 [==============================] - 17s 3ms/step - loss: 1.3345 - acc: 0.6593 - val_loss: 0.9764 - val_acc: 0.7689

Epoch 00002: val_loss improved from 1.02794 to 0.97636, saving model to saved_models/inceptionV3_dogbreed.h5
Epoch 3/50
6680/6680 [==============================] - 17s 3ms/step - loss: 1.0037 - acc: 0.7250 - val_loss: 0.9575 - val_acc: 0.7892

Epoch 00003: val_loss improved from 0.97636 to 0.95753, saving model to saved_models/inceptionV3_dogbreed.h5
Epoch 4/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.7989 - acc: 0.7738 - val_loss: 1.0579 - val_acc: 0.7796

Epoch 00004: val_loss did not improve from 0.95753
Epoch 5/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.6753 - acc: 0.8085 - val_loss: 1.0455 - val_acc: 0.7916

Epoch 00005: val_loss did not improve from 0.95753
Epoch 6/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.5793 - acc: 0.8322 - val_loss: 1.0392 - val_acc: 0.8060

Epoch 00006: val_loss did not improve from 0.95753
Epoch 7/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.5003 - acc: 0.8531 - val_loss: 1.1696 - val_acc: 0.7964

Epoch 00007: val_loss did not improve from 0.95753
Epoch 8/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.4648 - acc: 0.8674 - val_loss: 1.2392 - val_acc: 0.7976

Epoch 00008: val_loss did not improve from 0.95753
Epoch 9/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.3971 - acc: 0.8871 - val_loss: 1.1518 - val_acc: 0.8036

Epoch 00009: val_loss did not improve from 0.95753
Epoch 10/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.3588 - acc: 0.8954 - val_loss: 1.2577 - val_acc: 0.7928

Epoch 00010: val_loss did not improve from 0.95753
Epoch 11/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.3372 - acc: 0.9021 - val_loss: 1.2629 - val_acc: 0.7892

Epoch 00011: val_loss did not improve from 0.95753
Epoch 12/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.3041 - acc: 0.9097 - val_loss: 1.2606 - val_acc: 0.7940

Epoch 00012: val_loss did not improve from 0.95753
Epoch 13/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.2866 - acc: 0.9145 - val_loss: 1.3027 - val_acc: 0.8048

Epoch 00013: val_loss did not improve from 0.95753
Epoch 14/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.2618 - acc: 0.9184 - val_loss: 1.3185 - val_acc: 0.7952

Epoch 00014: val_loss did not improve from 0.95753
Epoch 15/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.2612 - acc: 0.9217 - val_loss: 1.3432 - val_acc: 0.7904

Epoch 00015: val_loss did not improve from 0.95753
Epoch 16/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.2328 - acc: 0.9278 - val_loss: 1.3404 - val_acc: 0.7976

Epoch 00016: val_loss did not improve from 0.95753
Epoch 17/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.2281 - acc: 0.9298 - val_loss: 1.4027 - val_acc: 0.7916

Epoch 00017: val_loss did not improve from 0.95753
Epoch 18/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.2003 - acc: 0.9404 - val_loss: 1.4359 - val_acc: 0.7940

Epoch 00018: val_loss did not improve from 0.95753
Epoch 19/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.2051 - acc: 0.9367 - val_loss: 1.3612 - val_acc: 0.8072

Epoch 00019: val_loss did not improve from 0.95753
Epoch 20/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1993 - acc: 0.9430 - val_loss: 1.3539 - val_acc: 0.8036

Epoch 00020: val_loss did not improve from 0.95753
Epoch 21/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1792 - acc: 0.9482 - val_loss: 1.4577 - val_acc: 0.7988

Epoch 00021: val_loss did not improve from 0.95753
Epoch 22/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1687 - acc: 0.9467 - val_loss: 1.4844 - val_acc: 0.7988

Epoch 00022: val_loss did not improve from 0.95753
Epoch 23/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1698 - acc: 0.9500 - val_loss: 1.3803 - val_acc: 0.8000

Epoch 00023: val_loss did not improve from 0.95753
Epoch 24/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1622 - acc: 0.9513 - val_loss: 1.4106 - val_acc: 0.7892

Epoch 00024: val_loss did not improve from 0.95753
Epoch 25/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1568 - acc: 0.9510 - val_loss: 1.5632 - val_acc: 0.7952

Epoch 00025: val_loss did not improve from 0.95753
Epoch 26/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1512 - acc: 0.9519 - val_loss: 1.5046 - val_acc: 0.8000

Epoch 00026: val_loss did not improve from 0.95753
Epoch 27/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1430 - acc: 0.9560 - val_loss: 1.5843 - val_acc: 0.7880

Epoch 00027: val_loss did not improve from 0.95753
Epoch 28/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1385 - acc: 0.9575 - val_loss: 1.4930 - val_acc: 0.7892

Epoch 00028: val_loss did not improve from 0.95753
Epoch 29/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1395 - acc: 0.9582 - val_loss: 1.5154 - val_acc: 0.8144

Epoch 00029: val_loss did not improve from 0.95753
Epoch 30/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1401 - acc: 0.9549 - val_loss: 1.5270 - val_acc: 0.7976

Epoch 00030: val_loss did not improve from 0.95753
Epoch 31/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1273 - acc: 0.9594 - val_loss: 1.5441 - val_acc: 0.7952

Epoch 00031: val_loss did not improve from 0.95753
Epoch 32/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1260 - acc: 0.9602 - val_loss: 1.6165 - val_acc: 0.7928

Epoch 00032: val_loss did not improve from 0.95753
Epoch 33/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1333 - acc: 0.9564 - val_loss: 1.5966 - val_acc: 0.7976

Epoch 00033: val_loss did not improve from 0.95753
Epoch 34/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1231 - acc: 0.9639 - val_loss: 1.5543 - val_acc: 0.8036

Epoch 00034: val_loss did not improve from 0.95753
Epoch 35/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1213 - acc: 0.9651 - val_loss: 1.5289 - val_acc: 0.7928

Epoch 00035: val_loss did not improve from 0.95753
Epoch 36/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1225 - acc: 0.9611 - val_loss: 1.5499 - val_acc: 0.8036

Epoch 00036: val_loss did not improve from 0.95753
Epoch 37/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1235 - acc: 0.9639 - val_loss: 1.6040 - val_acc: 0.7880

Epoch 00037: val_loss did not improve from 0.95753
Epoch 38/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1053 - acc: 0.9665 - val_loss: 1.5165 - val_acc: 0.8072

Epoch 00038: val_loss did not improve from 0.95753
Epoch 39/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1146 - acc: 0.9648 - val_loss: 1.5837 - val_acc: 0.7952

Epoch 00039: val_loss did not improve from 0.95753
Epoch 40/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1094 - acc: 0.9675 - val_loss: 1.6884 - val_acc: 0.7940

Epoch 00040: val_loss did not improve from 0.95753
Epoch 41/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1021 - acc: 0.9672 - val_loss: 1.5768 - val_acc: 0.7976

Epoch 00041: val_loss did not improve from 0.95753
Epoch 42/50
6680/6680 [==============================] - 18s 3ms/step - loss: 0.1094 - acc: 0.9659 - val_loss: 1.6578 - val_acc: 0.8072

Epoch 00042: val_loss did not improve from 0.95753
Epoch 43/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.1124 - acc: 0.9659 - val_loss: 1.7173 - val_acc: 0.7832

Epoch 00043: val_loss did not improve from 0.95753
Epoch 44/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.1072 - acc: 0.9666 - val_loss: 1.6020 - val_acc: 0.8012

Epoch 00044: val_loss did not improve from 0.95753
Epoch 45/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.0935 - acc: 0.9695 - val_loss: 1.6694 - val_acc: 0.7904

Epoch 00045: val_loss did not improve from 0.95753
Epoch 46/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.1080 - acc: 0.9678 - val_loss: 1.7228 - val_acc: 0.7952

Epoch 00046: val_loss did not improve from 0.95753
Epoch 47/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.1016 - acc: 0.9686 - val_loss: 1.6807 - val_acc: 0.7964

Epoch 00047: val_loss did not improve from 0.95753
Epoch 48/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.1055 - acc: 0.9695 - val_loss: 1.7309 - val_acc: 0.7916

Epoch 00048: val_loss did not improve from 0.95753
Epoch 49/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.0870 - acc: 0.9732 - val_loss: 1.6746 - val_acc: 0.7964

Epoch 00049: val_loss did not improve from 0.95753
Epoch 50/50
6680/6680 [==============================] - 17s 3ms/step - loss: 0.0979 - acc: 0.9728 - val_loss: 1.7209 - val_acc: 0.7737

Epoch 00050: val_loss did not improve from 0.95753
Constructing predictor...

Compare dog-breed predictors

In [37]:
dog_test_input, dog_test_labels = dog_test

test_multiclass_predictors(
    predictors[PREDICTOR_TYPE_DOG_BREED], dog_test_input, dog_test_labels
)
                               custom_cnn_dogbreed: 100%|██████████| 836/836 [00:15<00:00, 73.70it/s]
                                             vgg16: 100%|██████████| 836/836 [00:13<00:00, 61.25it/s]
                                             vgg19: 100%|██████████| 836/836 [00:15<00:00, 56.08it/s]
                                       inceptionV3: 100%|██████████| 836/836 [00:22<00:00, 37.61it/s]
Out[37]:
                     accuracy  f1_score
custom_cnn_dogbreed  0.223684  0.238902
vgg16                0.574163  0.637266
vgg19                0.566986  0.628906
inceptionV3          0.775120  0.785253

System Assembly

The best candidates based on f1_score are:

  • human predictor: haarcascade_frontalface_alt
  • dog predictor: resnet50
  • dog-breed predictor: inceptionV3

Below, a class is prepared that is capable of recreating the system outside of the notebook.

In [32]:
class HumanDogBreedPredictor:
    """Class which combines the 'haarcascade_frontalface_alt', 'ResNet50' and fine-tuned
    'InceptionV3' models into a system that detects humans and dogs and returns the
    dog breed for the image."""
    
    def __init__(self, human_predictor_filepath, dog_breed_predictor_filepath, dog_names):
        """Class constructor.

        Parameters:
        -----------
        human_predictor_filepath: str
            Path to .xml file of 'haarcascade_frontalface_alt' cascade.
        dog_breed_predictor_filepath: str
            Path to .h5 keras model file.
        dog_names: list
            List containing dog breed names in the same order as softmax output of
            trained dog breed prediction network.
            
        Returns:
        -----------
        None
        """
        self.human_predictor = self._init_human_predictor(human_predictor_filepath)
        print("Successfully loaded human predictor!")
        
        self.dog_predictor = self._init_dog_predictor()
        print("Successfully loaded dog predictor!")
        
        self.dogbreed_predictor = self._init_dog_breed_predictor(dog_breed_predictor_filepath)
        print("Successfully loaded dog_breed predictor!")
        
        self.dog_names = dog_names
        
    def _init_human_predictor(self, path):
        """Method which loads human predictor.

        Parameters:
        -----------
        path: str
            Path to .xml file of 'haarcascade_frontalface_alt' cascade.
            
        Returns:
        -----------
        human_predictor: cv2.CascadeClassifier
            Loaded cascade file wrapped in OpenCV class.
        """
        human_predictor = cv2.CascadeClassifier(path)
        return human_predictor
    
    def _init_dog_predictor(self):
        """Method which loads dog predictor.

        Parameters:
        -----------
        None
            
        Returns:
        -----------
        dog_predictor: Sequential
            Keras ResNet50 model.
        """
        dog_predictor = ResNet50(weights="imagenet")
        return dog_predictor
    
    def _init_dog_breed_predictor(self, path):
        """Method which loads dog breed predictor.

        Parameters:
        -----------
        path: str
            Path to .h5 keras model file.
            
        Returns:
        -----------
        dog_breed_predictor: Sequential
            Finetuned InceptionV3 Keras model.
        """
        dog_breed_predictor = load_model(path, compile=False)
        return dog_breed_predictor
       
    @staticmethod
    def _path_to_tensor(img_path, img_size=(224, 224)):
        """Method loads the image at the given filepath, resizes it and returns it as a numpy.ndarray.

        Parameters:
        -----------
        img_path: str
            Filepath to image file.
        img_size: tuple
            Tuple to which loaded image will be resized.

        Returns:
        -----------
        img: numpy.ndarray
            Returns loaded and resized image.
        """
        img = image.load_img(img_path, target_size=img_size)
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        return img
    
    def _human_prediction(self, img_path):
        """For given image path returns flag whether it contains human or not.

        Parameters:
        -----------
        img_path: str
            Filepath to image file.

        Returns:
        -----------
        result: int
            Value 1 if image contains human, 0 otherwise.
        """
        img = cv2.imread(img_path)
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        result = int(len(self.human_predictor.detectMultiScale(img_gray)) > 0)
        return result
    
    def _dog_prediction(self, img_path):
        """For given image path returns flag whether it contains dog or not.

        Parameters:
        -----------
        img_path: str
            Filepath to image file.

        Returns:
        -----------
        result: int
            Value 1 if image contains dog, 0 otherwise.
        """
        img = self._path_to_tensor(img_path)
        img = resnet50_preprocess_input(img)
        result = np.argmax(self.dog_predictor.predict(img))
        result = int((result <= 268) and (result >= 151))  # ImageNet classes 151-268 are dog breeds
        return result
    
    def _dogbreed_prediction(self, img_path):
        """For given image path returns id of dog breed class.

        Parameters:
        -----------
        img_path: str
            Filepath to image file.

        Returns:
        -----------
        result: int
            Id of dog breed class.
        """
        img = self._path_to_tensor(img_path)
        img = inceptionv3_preprocess_input(img)
        result = np.argmax(self.dogbreed_predictor.predict(img))
        return result
        
    def predict(self, img_path, plot=False, verbose=False):
        """Method which, for a given image path, loads the image, runs each
        predictor and, based on the results, returns a message to the user.
        
        Parameters:
        -----------
        img_path: str
            Filepath to image file.
        plot: bool
            If set to True, the image will also be displayed.
        verbose: bool
            If set to True, messages describing the prediction will be printed.

        Returns:
        -----------
        is_human: int
            Information whether image contains human.
        is_dog: int
            Information whether image contains dog.
        dog_breed: str
            Name of the dog breed.
        """
        print("Making prediction for: {}".format(img_path))
        
        is_human = self._human_prediction(img_path)
        is_dog = self._dog_prediction(img_path)
        dog_breed = self.dog_names[self._dogbreed_prediction(img_path)]
        
        if plot:
            img = cv2.imread(img_path)
            cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            plt.imshow(cv_rgb)
            plt.show()
        
        if verbose:
            if is_dog and is_human:
                print("\t- System got confused... it seems that it's a "
                      + "dog and human at the same time. Scary!")
            elif is_dog:
                print("\t- This is dog of breed: {}".format(dog_breed))
            elif is_human:
                print("\t- This is human that looks like dog of breed: {}".format(dog_breed))
            else:
                print("\t- This is neither dog nor human.")
            
        return is_human, is_dog, dog_breed
In [33]:
human_predictor_dir = os.path.join(OPENCV_HAAR_CASCADES_DIR, "haarcascade_frontalface_alt.xml")
dog_breed_predictor_dir = os.path.join(SAVED_MODELS_DIR, "inceptionV3_dogbreed.h5")

system = HumanDogBreedPredictor(human_predictor_dir, dog_breed_predictor_dir, class_order)
Successfully loaded human predictor!
Successfully loaded dog predictor!
Successfully loaded dog_breed predictor!

System Tests

In [34]:
for img_name in os.listdir(TEST_IMAGES_DIR):
    img_filepath = os.path.join(TEST_IMAGES_DIR, img_name)
    system.predict(img_filepath, plot=True, verbose=True)
    print("-------------------------------------")
Making prediction for: data/test_images/nondog9.jpg
	- This is neither dog nor human.
-------------------------------------
Making prediction for: data/test_images/nondog3.jpg
	- This is neither dog nor human.
-------------------------------------
Making prediction for: data/test_images/dog1_dalmatian.jpg
	- This is dog of breed: Dalmatian
-------------------------------------
Making prediction for: data/test_images/dog2_pug.jpg
	- This is dog of breed: Bulldog
-------------------------------------
Making prediction for: data/test_images/nondog2.jpg
	- This is neither dog nor human.
-------------------------------------
Making prediction for: data/test_images/nondog10.png
	- This is neither dog nor human.
-------------------------------------
Making prediction for: data/test_images/nondog5.jpeg
	- This is neither dog nor human.
-------------------------------------
Making prediction for: data/test_images/nondog4.jpeg
	- This is neither dog nor human.
-------------------------------------
Making prediction for: data/test_images/nondog6.png
	- This is human that looks like dog of breed: Chow_chow
-------------------------------------
Making prediction for: data/test_images/person2.jpg
	- This is human that looks like dog of breed: Briard
-------------------------------------
Making prediction for: data/test_images/person1.jpg
	- This is human that looks like dog of breed: Yorkshire_terrier
-------------------------------------
Making prediction for: data/test_images/person4.png
	- This is human that looks like dog of breed: Portuguese_water_dog
-------------------------------------
Making prediction for: data/test_images/nondog8.jpg
	- This is neither dog nor human.
-------------------------------------
Making prediction for: data/test_images/dog3_doberman.jpg
	- This is dog of breed: Doberman_pinscher
-------------------------------------
Making prediction for: data/test_images/person3.png
	- This is human that looks like dog of breed: Portuguese_water_dog
-------------------------------------
Making prediction for: data/test_images/person5.jpg
	- This is human that looks like dog of breed: Lowchen
-------------------------------------
Making prediction for: data/test_images/nondog7.jpeg
	- This is neither dog nor human.
-------------------------------------
Making prediction for: data/test_images/dog5_siberian_husky.jpg
	- This is dog of breed: Alaskan_malamute
-------------------------------------
Making prediction for: data/test_images/dog4_golden_retriever.jpg
	- This is dog of breed: Golden_retriever
-------------------------------------
Making prediction for: data/test_images/nondog1.jpg
	- This is neither dog nor human.
-------------------------------------

Conclusions


The system combines three models:

  1. Human Predictor - the Haar cascade haarcascade_frontalface_alt included in the OpenCV library. It detects human faces; when a face is found in an image, the classifier flags that a human is present. It is quite a naive solution, but it surprisingly gives an f1_score of 0.938 on a testing dataset constructed from 500 human images and 500 dog images. Other facial cascades were tested too, but the selected one gave the best results. It is also important to note that some dog images contain humans, which is the main weakness of this solution. Apart from that, it is quite straightforward and lightweight.
  2. Dog Predictor - ResNet50, loaded directly with pretrained weights from the Keras library. Luckily ResNet50 was trained on many animal images. It is capable of detecting various dog breeds and can therefore be used as a dog detector: if for a given image it returns a class corresponding to a dog breed, it is possible to say that a dog is present in the image. Even if a dog of a breed unknown to ResNet50 is used for prediction, it should still return the dog breed class most similar to the given dog (as dogs are in general similar to each other). So in this case no training was needed. The model performs very well, with an f1_score of 0.987: tested on a dataset constructed from 500 human images and 500 dog images, it managed to correctly detect 492 dogs and correctly ignore 496 people. Further investigation could be made into which human images the network failed on; maybe there was a human with a dog, or some dog in the background.

  3. Dog Breed Predictor - a fine-tuned neural network with the InceptionV3 architecture. InceptionV3 was loaded with "imagenet" weights through the Keras library. The top dense layers were thrown away and the rest of the network was frozen. A new top was constructed from a GlobalAveragePooling2D layer and a Dense layer with softmax output, so the network could be trained to predict the new outputs, which are dog breeds. Apart from InceptionV3, architectures like VGG16 and VGG19 were also tested. The best architecture was picked, and the result is an f1_score of 0.785 on the testing dataset, which was not available to the network during training.
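The new classification head described above can be sketched as follows. This is a minimal reconstruction, not the notebook's actual training code: the bottleneck feature shape (5, 5, 2048) for 224x224 InceptionV3 inputs and the breed count of 133 are assumptions.

```python
from keras.models import Sequential
from keras.layers import GlobalAveragePooling2D, Dense

# New top trained on frozen InceptionV3 bottleneck features; the
# input shape (5, 5, 2048) and class count (133) are assumptions.
model = Sequential([
    GlobalAveragePooling2D(input_shape=(5, 5, 2048)),
    Dense(133, activation="softmax"),
])
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training only this small head on precomputed bottleneck features is what keeps the epochs in the log above so fast.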

Self-made test dataset results

To see how the system performs on images other than the provided humans and dogs, 20 images were picked from a Google search. This is of course a very small sample, not enough to be sure the performance is good, but a few conclusions can be made:

  1. The system was always able to correctly detect dogs and humans.
  2. The system made three mistakes:
    • it mistook a Pug for a Bulldog - this might be because the pug image is zoomed in, and a pug actually does look similar to a bulldog if you check what a bulldog looks like,
    • it mistook a Siberian Husky for an Alaskan Malamute - again a close mistake: both dogs have the same coloring and I personally cannot tell them apart; the Siberian Husky image also has some extra lighting on the dog,
    • it mistook a Lion for a Chow Chow - my favourite mistake the model has made, as it shows that it really picks up on similarity.

  3. The comparison of Yoshua Bengio to a Briard and Yann LeCun to a Portuguese Water Dog was somewhat accurate.

Potential improvements

Machine Learning side

  1. Architectures - It is important to note that the current system is just a prototype, a showcase of what could be done and improved in the future. The human data was not used at all to teach a model how to detect humans; a dedicated deep learning model could be produced. I tried to play with fine-tuning https://github.com/rcmalli/keras-vggface, but the first results didn't go well and I had to make the deadline. A network that knows more about how people look would surely be a more stable predictor. The same goes for the dog predictor: usage of ResNet50 is just a workaround, as it is not a model that was trained specifically to detect dogs.
  2. Data Amount - There were ~8800 dog images. A lot of other dog datasets are available, so it is quite easy to get more data and build an even better model. The same goes for the human dataset: there are datasets meant for face detection or image generation (e.g. http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html has 200k human images).
  3. Data Quality - Despite the small amount of images, there were cases where people appeared together with dogs, or dogs of various breeds appeared in the same image. Such images should be removed in the current system setup.
  4. Weak Point Analysis - It is important to look at how the model performs on each class and which classes get mixed up. For example, the Pug/Bulldog error case has shown that the network might need more zoomed-in images of Pugs and Bulldogs, because if it doesn't see the whole dog it is easy to make a mistake just by looking at the face. Similar analysis could be done for more images.
  5. System Complexity - This is a very basic setup that is unable to classify images containing many dogs, or dogs and people together. First, detection could be performed with a strong neural network like Faster RCNN ResNet 101 or others (e.g. available here https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md). Of course it is a challenge to find data with bounding boxes attached, but if such data were found, detection of humans and dogs could be performed, and the cropped images could go to the Dog Breed predictor. Then the system could give multiple outputs for different dogs/humans in the same image.
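The detection-then-classification idea in the System Complexity point could be sketched like this; `boxes` stands in for the output of a hypothetical detection model, and each crop would then be resized and passed to the breed predictor separately:

```python
import numpy as np

def crop_detections(img, boxes):
    """Crop each detected region from the image.

    `boxes` is assumed to be a list of (x1, y1, x2, y2) pixel
    coordinates produced by a hypothetical object detector.
    """
    return [img[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]

# e.g. two detections on a dummy image; each crop could then be
# resized to 224x224 and sent to the dog-breed predictor
img = np.zeros((480, 640, 3), dtype=np.uint8)
crops = crop_detections(img, [(10, 20, 110, 120), (300, 50, 400, 250)])
```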

Engineering side

  1. RAM Efficiency - Currently this notebook works only because the image datasets are small: when training the neural networks, all data was kept in memory, and a PC with 16GB of RAM started using swap anyway with only ~8800 images loaded. To avoid such memory load, batches should be constructed from image paths and only the current batch should be loaded. The batch size could also be reduced.
  2. Code Structure - Usage of the Predictor class made importing/exporting the model quite hard, even though it is good for comparing models and comfortable to work with. I wonder whether KerasWrappers and saving the whole predict process into sklearn's Pipeline wouldn't be more effective; then the model could be pickled together with the rest of its functions.
  3. Project Structure - All code should be moved to .py files in the project folder, and the notebook should only use imports. With all functions living in this notebook, it became very hard to work with and read.
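The lazy batching mentioned in the RAM Efficiency point could look roughly like this. It is a sketch, not the notebook's actual code: `load_fn` is a placeholder for the path_to_tensor plus preprocessing step, and the endless loop matches what Keras' fit_generator expects.

```python
import numpy as np

def batch_generator(img_paths, labels, batch_size, load_fn):
    """Yield (images, labels) batches, loading images from disk lazily
    so that only one batch lives in RAM at a time."""
    while True:  # fit_generator-style endless iteration
        for lo in range(0, len(img_paths), batch_size):
            batch_paths = img_paths[lo:lo + batch_size]
            # load_fn is assumed to return a (1, H, W, 3) array per path
            imgs = np.vstack([load_fn(p) for p in batch_paths])
            yield imgs, np.array(labels[lo:lo + batch_size])
```

With such a generator, only `batch_size` decoded images are ever held in memory at once, instead of the whole ~8800-image dataset.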
In [ ]: